Facebook’s ethical failures are not accidental; they are part of the business model

  • Opinion Paper
  • Published: 05 June 2021
  • Volume 1 , pages 395–403, ( 2021 )


  • David Lauer, ORCID: orcid.org/0000-0002-0003-4521


Facebook’s stated mission is “to give people the power to build community and bring the world closer together.” But a deeper look at their business model suggests that it is far more profitable to drive us apart. By creating “filter bubbles”—social media algorithms designed to increase engagement and, consequently, create echo chambers where the most inflammatory content achieves the greatest visibility—Facebook profits from the proliferation of extremism, bullying, hate speech, disinformation, conspiracy theory, and rhetorical violence. Facebook’s problem is not a technology problem. It is a business model problem. This is why solutions based in technology have failed to stem the tide of problematic content. If Facebook employed a business model focused on efficiently providing accurate information and diverse views, rather than addicting users to highly engaging content within an echo chamber, the algorithmic outcomes would be very different.

Facebook’s failure to check political extremism, [ 15 ] willful disinformation, [ 39 ] and conspiracy theory [ 43 ] has been well-publicized, especially as these unseemly elements have penetrated mainstream politics and manifested as deadly, real-world violence. So it naturally raised more than a few eyebrows when Facebook’s Chief AI Scientist Yann LeCun tweeted his concern [ 32 ] over the role of right-wing personalities in downplaying the severity of the COVID-19 pandemic. Critics were quick to point out [ 29 ] that Facebook has profited handsomely from exactly this brand of disinformation. Consistent with Facebook’s recent history on such matters, LeCun was both defiant and unconvincing.

In response to a frenzy of hostile tweets, LeCun made the following four claims:

Facebook does not cause polarization or so-called “filter bubbles” and that “most serious studies do not show this.”

Critics [ 30 ] who argue that Facebook is profiting from the spread of misinformation are “factually wrong.” Footnote 1

Facebook uses AI-based technology to filter out [ 33 ]:

Hate speech;

Calls to violence;

Bullying; and

Disinformation that endangers public safety or the integrity of the democratic process.

Facebook is not an “arbiter of political truth” and that having Facebook “arbitrate political truth would raise serious questions about anyone’s idea of ethics and liberal democracy.”

Absent from the claims above is acknowledgement that the company’s profitability depends substantially upon the polarization LeCun insists does not exist.

Facebook has had a profound impact on our access to ideas, information, and one another. It has unprecedented global reach, and in many markets serves as a de-facto monopolist. The influence it has over individual and global affairs is unique in human history. Mr. LeCun has been at Facebook since December 2013, first as Director of AI Research and then as Chief AI Scientist. He has played a leading role in shaping Facebook’s technology and approach. Mr. LeCun’s problematic claims demand closer examination. What follows, therefore, is a response to these claims which will clearly demonstrate that Facebook:

Elevates disinformation campaigns and conspiracy theories from the extremist fringes into the mainstream, fostering, among other effects, the resurgent anti-vaccination movement, broad-based questioning of basic public health measures in response to COVID-19, and the proliferation of the Big Lie of 2020—that the presidential election was stolen through voter fraud [ 16 ];

Empowers bullies of every size, from cyber-bullying in schools, to dictators who use the platform to spread disinformation, censor their critics, perpetuate violence, and instigate genocide;

Defrauds both advertisers and newsrooms, systematically and globally, with falsified video engagement and user activity statistics;

Reflects an apparent political agenda espoused by a small core of corporate leaders, who actively impede or overrule the adoption of good governance;

Brandishes its monopolistic power to preserve a social media landscape absent meaningful regulatory oversight, privacy protections, safety measures, or corporate citizenship; and

Disrupts intellectual and civil discourse, at scale and by design.

1 I deleted my Facebook account

I deleted my account years ago for the reasons noted above, and a number of far more personal reasons. So when LeCun reached out to me, demanding evidence for my claims regarding Facebook’s improprieties, it was via Twitter. What proof did I have that Facebook creates filter bubbles that drive polarization?

In anticipation of my response, he offered the claims highlighted above. As evidence of his claims, he directed my attention to a single research paper [ 23 ] that, on closer inspection, does not appear at all to reinforce his case.

The entire exchange also suggests that senior leadership at Facebook still suffers from a massive blindspot regarding the harm that its platform causes—that they continue to “move fast and break things” without regard for the global impact of their behavior.

LeCun’s comments confirm the concerns that many of us have held for a long time: Facebook has declined to resolve its systemic problems, choosing instead to paper over these deep philosophical flaws with advanced, though insufficient, technological solutions. Even when Facebook takes occasion to announce its triumphs in the ethical use of AI, such as its excellent work [ 8 ] detecting suicidal tendencies, its advancements pale in comparison to the inherent problems written into its algorithms.

This is because, fundamentally, their problem is not a failure of technology, nor a shortcoming in their AI filters. Facebook’s problem is its business model. Facebook makes superficial technology changes, but at its core, profits chiefly from engagement and virality. Study after study has found that “lies spread faster than the truth,” [ 47 ] “conspiracy theories spread through a more decentralized network,” [ 41 ] and that “politically extreme sources tend to generate more interactions from users.” Footnote 2 Facebook knows that the most efficient way to maximize profitability is to build algorithms that create filter bubbles and spread viral misinformation.

This is not a fringe belief or controversial opinion. This is a reality acknowledged even by those who have lived inside of Facebook’s leadership structure. As the former director of monetization for Facebook, Tim Kendall explained in his Congressional testimony, “social media services that I, and others have built, have torn people apart with alarming speed and intensity. At the very least we have eroded our collective understanding—at worst, I fear we are pushing ourselves to the brink of a civil war.” [ 38 ]

2 Facebook’s black box

To effectively study behavior on Facebook, we must be able to study Facebook’s algorithms and AI models. Therein lies the first problem. The data and transparency to do so are simply not there. Facebook does not practice transparency—they do not make comprehensive data available on their recommendation and filtering algorithms, or their other implementations of AI. One organization attempting to study the spread of misinformation, NYU’s Cybersecurity for Democracy, explains, “[o]ur findings are limited by the lack of data provided by Facebook…. Without greater transparency and access to data, such research questions are out of reach.” Footnote 3

Facebook’s algorithms and AI models are proprietary, and they are intentionally hidden from us. While this is normal for many companies, no other company has 2.85 billion monthly active users. Any platform that touches so many lives must be studied so that we can truly understand its impact. Yet Facebook does not make the kind of data available that is needed for robust study of the platform.

Facebook would likely counter this, and point to their partnership with Harvard’s Institute for Quantitative Social Science (Social Science One) as evidence that they are making data available to researchers [ 19 ]. While this partnership is one step in the right direction, there are several problems with this model:

The data are extremely limited, consisting solely of web page addresses shared on Facebook during an 18-month period from 2017 to 2019.

Researchers have to apply for access to the data through Social Science One, which acts as a gatekeeper of the data.

If approved, researchers have to execute an agreement directly with Facebook.

This is not an open, scientific process. It is, rather, a process that empowers administrators to cherry-pick research projects that favor their perspective. If Facebook were serious about facilitating academic research, they would provide far greater access to, availability of, and insight into the data. There are legitimate privacy concerns around releasing data, but there are far better ways to address those concerns while fostering open, vibrant research.

3 Does Facebook cause polarization?

LeCun cited a single study as evidence that Facebook does not cause polarization. But do the findings of this study support Mr. LeCun’s claims?

The study concludes that “polarization has increased the most among the demographic groups least likely to use the Internet and social media.” The study does not, however, actually measure this type of polarization directly. Its primary data-gathering instrument—a survey on polarization—did not ask whether respondents were on the Internet or if they used social media. Instead, the study estimates whether an individual respondent is likely to be on the Internet based on an index of demographic factors which suggest “predicted” Internet use. As explained in the study, “the main predictor [they] focus on is age” [ 23 ]. Age is estimated to be negatively correlated with social media usage. Therefore, since older people are also shown to be more politically polarized, LeCun takes this as evidence that social media use does not cause polarization.

This assumption of causality is flawed. The study does not point to a causal relationship between these demographic factors and social media use. It simply says that these demographic factors drive polarization. Whether these factors have a correlational or causative relationship with the Internet and social media use is complete conjecture. The author of the study himself caveats any such conclusions, noting that “[t]hese findings do not rule out any effect of the internet or social media on political polarization.” [ 5 ].

Not only is LeCun’s assumption flawed, it is directly refuted by a recent Pew Research study [ 3 ] which found that fully half of US adults aged 65 and older are on Facebook (50%), a higher share than for any other social network. If anything, older age is actually more clearly correlated with Facebook use relative to other social networks.

Moreover, in 2020, the MIS Quarterly journal published a study by Steven L. Johnson, et al. that explored this problem and found that the “more time someone spends on Facebook, the more polarized their online news consumption becomes. This evidence suggests Facebook indeed serves as an echo chamber especially for its conservative users” [ 24 ].

Allcott et al. also explore this question in “The Welfare Effects of Social Media” (November 2019), beginning with a review of other studies confirming a relationship between social media use, well-being, and political polarization [ 1 ]:

More recent discussion has focused on an array of possible negative impacts. At the individual level, many have pointed to negative correlations between intensive social media use and both subjective well-being and mental health. Adverse outcomes such as suicide and depression appear to have risen sharply over the same period that the use of smartphones and social media has expanded. Alter (2018) and Newport (2019), along with other academics and prominent Silicon Valley executives in the “time well-spent” movement, argue that digital media devices and social media apps are harmful and addictive. At the broader social level, concern has focused particularly on a range of negative political externalities. Social media may create ideological “echo chambers” among like-minded friend groups, thereby increasing political polarization (Sunstein 2001, 2017; Settle 2018). Furthermore, social media are the primary channel through which misinformation spreads online (Allcott and Gentzkow 2017), and there is concern that coordinated disinformation campaigns can affect elections in the US and abroad.

Allcott’s 2019 study uses a randomized experiment in the run-up to the November 2018 midterm elections to examine how Facebook affects several measures of individual and social welfare. The authors found that:

deactivating Facebook for the four weeks before the 2018 US midterm election (1) reduced online activity, while increasing offline activities such as watching TV alone and socializing with family and friends; (2) reduced both factual news knowledge and political polarization; (3) increased subjective well-being; and (4) caused a large persistent reduction in post-experiment Facebook use.

In other words, not using Facebook for a month made participants happier and led to less future use. In fact, the authors report that “deactivation significantly reduced polarization of views on policy issues and a measure of exposure to polarizing news.” None of these findings would come as a surprise to anybody who works at Facebook.

“A former Facebook AI researcher” confirmed that they ran “‘study after study’ confirming the same basic idea: models that maximize engagement increase polarization” [ 21 ]. Not only did Facebook know this, but they continued to design and build their recommendation algorithms to maximize user engagement, knowing that this meant optimizing for extremism and polarization. Footnote 4

Facebook understood what they were building according to Tim Kendall’s Congressional testimony in 2020. He explained that “we sought to mine as much attention as humanly possible and turn [sic] into historically unprecedented profits” [ 38 ]. He went on to explain that their inspiration was “Big Tobacco’s playbook … to make our offering addictive at the outset.” They quickly figured out that “extreme, incendiary content” directly translated into “unprecedented engagement—and profits.” He was the director of monetization for Facebook—few would have been better positioned to understand Facebook’s motivations, findings and strategy.

4 Engagement, filter bubbles, and executive compensation

The term “filter bubble” was coined by Eli Pariser, who wrote a book of that title exploring how social media algorithms are designed to increase engagement and create echo chambers where inflammatory posts are more likely to go viral. Filter bubbles are not just an algorithmic outcome; often we filter our own lives, surrounding ourselves with friends (online and offline) who are more likely to agree with our philosophical, religious and political views.

Social media platforms capitalize on our natural tendency toward filtered engagement. These platforms build algorithms, and structure executive compensation, [ 27 ] to maximize such engagement. By their very design, social media curation and recommendation algorithms are engineered to maximize engagement, and thus, are predisposed to create filter bubbles.
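
To make the mechanism concrete, below is a minimal, hypothetical sketch of what a purely engagement-driven ranking step looks like. The field names, scoring weights, and example posts are invented for illustration; they are not drawn from Facebook’s actual systems.

```python
# Hypothetical illustration only: a toy "rank by predicted engagement" feed.
# The weights, fields, and example data are assumptions made for this sketch,
# not a description of any real platform's code.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float   # model's estimate of click-through probability
    predicted_shares: float   # proxy for virality
    outrage_score: float      # proxy for emotionally charged, inflammatory content
    agrees_with_user: float   # similarity to the user's existing views, 0..1

def engagement_score(post: Post) -> float:
    # A purely engagement-driven objective: emotionally charged, view-confirming
    # content scores higher because those signals correlate with clicks and shares.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 1.5 * post.outrage_score
            + 1.0 * post.agrees_with_user)

def rank_feed(candidates: list[Post], k: int = 10) -> list[Post]:
    # No term rewards accuracy or viewpoint diversity, so the top of the feed
    # drifts toward the most provocative, confirming items.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

if __name__ == "__main__":
    demo = [
        Post("calm_factual_report", 0.10, 0.02, 0.05, 0.50),
        Post("outrage_rumor", 0.30, 0.25, 0.90, 0.95),
    ]
    for p in rank_feed(demo, k=2):
        print(p.post_id, round(engagement_score(p), 2))
```

The point of the sketch is that nothing in such an objective rewards accuracy or exposure to differing views: an inflammatory, ideologically confirming post outranks a sober report. Filtering content after the fact does not change that; only changing the objective does.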

Facebook has long attracted criticism for its pursuit of growth at all costs. A recent profile of Facebook’s AI efforts details the difficulty of getting “buy-in or financial support when the work did not directly improve Facebook’s growth” [ 21 ]. Andrew Bosworth, a Vice President at Facebook, said in a 2016 memo that nothing matters but growth, and that “all the work we do in growth is justified” regardless of whether “it costs someone a life by exposing someone to bullies” or if “somebody dies in a terrorist attack coordinated on our tools” [ 31 ].

Bosworth and Zuckerberg went on to claim [ 36 ] that the shocking memo was merely an attempt at being provocative. Certainly, it succeeded in that aim. But what else could they really say? It is not a good look, and it looks even worse when you consider that Facebook’s top brass really do get paid more when these things happen. The above-referenced report, based on interviews with multiple former Facebook product managers, shows that the company’s executive compensation system is largely built around its most important metric: user engagement. This creates a perverse incentive. And clearly, by their own admission, Facebook will not allow a few casualties to get in the way of executive compensation.

5 Is it incidental or intentional?

Yaël Eisenstat, a former CIA analyst who specialized in counter-extremism, went on to work at Facebook out of concern that the social media platform was increasing radicalization and political polarization. She explained in a TED talk [ 13 ] that the current information ecosystem is manipulating its users, and that “social media companies like Facebook profit off of segmenting us and feeding us personalized content that both validates and exploits our biases. Their bottom line depends on provoking a strong emotion to keep us engaged, often incentivizing the most inflammatory and polarizing voices.” This emotional response results in more than just engagement—it results in addiction.

Eisenstat joined Facebook in 2018 and began to explore the issues which were most divisive on the social media platform. She began asking questions internally about what was causing this divisiveness. She found that “the largest social media companies are antithetical to the concept of reasoned discourse … Lies are more engaging online than truth, and salaciousness beats out wonky, fact-based reasoning in a world optimized for frictionless virality. As long as algorithms’ goals are to keep us engaged, they will continue to feed us the poison that plays to our worst instincts and human weaknesses.”

She equated Facebook’s algorithmic manipulation to the tactics that terrorist recruiters use on vulnerable youth. She offered Facebook a plan to combat political disinformation and voter suppression. She has claimed that the plan was rejected, and she left after just six months.

As noted earlier, LeCun flatly denies [ 34 ] that Facebook creates filter bubbles that drive polarization. In sharp contrast, Eisenstat explains that such an outcome is a feature of their algorithm, not a bug. The Wall St. Journal reported that in 2018, senior executives at Facebook were informed of the following conclusions during an internal presentation [ 22 ]:

“Our algorithms exploit the human brain’s attraction to divisiveness… [and] if left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention and increase time on the platform.”

The platform aggravates polarization and tribal behavior.

Some proposed algorithmic changes would “disproportionately affect[] conservative users and publishers.”

Looking at data for Germany, an internal report found “64% of all extremist group joins are due to our recommendation tools … Our recommendation systems grow the problem.”

These are Facebook’s own words, and arguably, they provide the social media platform with an invaluable set of marketing prerogatives. They are reinforced by Tim Kendall’s testimony as discussed above.

“Most notably,” reported the WSJ, “the project forced Facebook to consider how it prioritized ‘user engagement’—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.” As noted in the section above, executive compensation was tied to “user engagement,” which meant product developers at Facebook were incentivized to design systems in this very way. Footnote 5

Mark Zuckerberg and Joel Kaplan reportedly [ 22 ] dismissed the conclusions from the 2018 presentation, calling efforts to bring greater civility to conversations on the social media platform “paternalistic.” Zuckerberg went on to say that he would “stand up against those who say that new types of communities forming on social media are dividing us.” Kaplan reportedly “killed efforts to build a classification system for hyperpolarized content.” Failing to address this has resulted in algorithms that, as Tim Kendall explained, “have brought out the worst in us. They have literally rewired our brains so that we are detached from reality and immersed in tribalism” [ 38 ].

Facebook would have us believe that it has made great strides in confronting these problems over just the last two years, as Mr. LeCun has claimed. But at present, the burden of proof is on Facebook to produce the full, raw data so that independent researchers can make a fair assessment of his claims.

6 The AI filter

According to LeCun’s tweets cited at the beginning of this paper, Facebook’s AI-powered filter cleanses the platform of:

Hate speech;

Calls to violence;

Bullying; and

Disinformation that endangers public safety or the integrity of the democratic process.

These are his words, so we will use them here, even though the precise definitions of hate speech, calls to violence, and the other terms are potentially controversial and open to debate.

These claims are provably false. While “AI” (along with some very large, manual curation operations in developing countries) may effectively filter some of this content, at Facebook’s scale, some is not enough.

Let’s examine each of these claims a little more closely.

6.1 Does Facebook actually filter out hate speech?

An investigation by the UK-based counter-extremist organization ISD (Institute for Strategic Dialogue) found that Facebook’s algorithm “actively promotes” Holocaust denial content [ 20 ]. The same organization, in another report, documents how Facebook’s “delays or mistakes in policy enforcement continue to enable hateful and harmful content to spread through paid targeted ads” [ 17 ]. They go on to explain that “[e]ven when action is taken on violating ad content, such a response is often reactive and delayed, after hundreds, thousands, or potentially even millions of users have already been served those ads on their feeds.” Footnote 6

Zuckerberg admitted in April 2018 that hate speech in Myanmar was a problem, and pledged to act. Four months later, Reuters found more than “1000 examples of posts, comments, images and videos attacking the Rohingya or other Myanmar Muslims that were on Facebook” [ 45 ]. As recently as June 2020 there were reports [ 7 ] of troll farms using Facebook to intimidate opponents of Rodrigo Duterte in the Philippines with death threats and hateful comments.

6.2 Does Facebook actually filter out calls to violence?

The Sri Lankan government had to block access to Facebook “amid a wave of violence against Muslims … after Facebook ignored years of calls from both the government and civil society groups to control ethnonationalist accounts that spread hate speech and incited violence” [ 42 ]. A September 2014 report from the Center for Policy Alternatives detailed evidence of 20 hate groups in Sri Lanka and informed Facebook. In March 2018, BuzzFeed reported that “16 out of the 20 groups were still on Facebook”. Footnote 7

When former President Trump tweeted, in response to Black Lives Matter protests, that when “the looting starts, the shooting starts,” the message was liked and shared hundreds of thousands of times across Facebook and Instagram, even as other social networks such as Twitter flagged the message for its explicit incitement of violence [ 48 ] and prevented it from being retweeted.

Facebook played a pivotal role in the planning of the January 6th insurrection in the US, providing an unchecked platform for proliferation of the Big Lie, radicalization around this lie, and coordinated organization around explicitly-stated plans to engage in violent confrontation at the nation’s Capitol on the outgoing president’s behalf. Facebook’s role in the deadly violence was far greater and more widespread than the role of Parler and the other fringe right-wing platforms that attracted so much attention in the aftermath of the attack [ 11 ].

6.3 Does Facebook actually filter out cyberbullying?

According to Enough Is Enough, a non-partisan, non-profit organization whose mission is “making the Internet safer for children and families,” the answer is a resounding no. According to their most recent cyberbullying statistics, [ 10 ] 47% of young people have been bullied online, and the two most prevalent platforms are Instagram at 42% and Facebook at 37%.

In fact, Facebook is failing to protect children on a global scale. According to a UNICEF poll of children in 30 countries, one in every three young people says that they have been victimized by cyberbullying. And one in five says the harassment and threat of actual violence caused them to skip school. According to the survey, conducted in concert with the UN Special Representative of the Secretary-General (SRSG) on Violence against Children, “almost three-quarters of young people also said social networks, including Facebook, Instagram, Snapchat and Twitter, are the most common place for online bullying” [ 49 ].

6.4 Does Facebook actually filter out “disinformation that endangers public safety or the integrity of the democratic process?”

To list the evidence contradicting this point would be exhausting. Below are just a few examples:

The Computational Propaganda Research Project found in their 2019 Global Inventory of Organized Social Media Manipulation that 70 countries had disinformation campaigns organized on social media in 2019, with Facebook as the top platform [ 6 ].

A Facebook whistleblower produced a 6600 word memo detailing case after case of Facebook “abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe.” [ 44 ]

Facebook is ground zero for anti-vaccination and pandemic misinformation: the 26-minute conspiracy theory film “Plandemic” went viral on Facebook in April 2020, garnering tens of millions of views. Facebook’s attempt to purge itself of anti-vaccination disinformation was easily thwarted when the groups guilty of proliferating this content removed the word “vaccine” from their names. In addition to undermining public health by spreading provably false content, these anti-vaccination groups have obscured meaningful discourse about the actual health concerns and risks that may or may not be connected to vaccinations. A paper from May 2020 attempts to map the “multi-sided landscape of unprecedented intricacy that involves nearly 100 million individuals” [ 25 ] entangled with anti-vaccination clusters. That report predicts that anti-vaccination views “will dominate in a decade” given their explosive growth and intertwining with undecided people.

According to the Knight Foundation and Gallup, [ 26 ] 75% of Americans believe they “were exposed to misinformation about the election” on Facebook during the 2020 US presidential election. This is one of those rare issues on which Republicans (76%), Democrats (75%) and Independents (75%) agree: Facebook was the primary source for election misinformation.

If those AI filters are in fact working, they are not working very well.

All of this said, Facebook’s reliance on “AI filters” misses a critical point, which is that you cannot have AI ethics without ethics [ 30 ]. These problems cannot be solved with AI. These problems cannot be solved with checklists, incremental advances, marginal changes, or even state-of-the-art deep learning networks. They are caused by the company’s entire business model and mission. Bosworth’s provocative quotes above, along with Tim Kendall’s direct testimony, demonstrate as much.

These are systemic issues, not technological ones. Yaël Eisenstat put it best in her TED talk: “as long as the company continues to merely tinker around the margins of content policy and moderation, as opposed to considering how the entire machine is designed and monetized, they will never truly address how the platform is contributing to hatred, division and radicalization.”

7 Facebook does not want to be the arbiter of truth

We should probably take comfort in Facebook’s claim that it does not wish to be the “arbiter of political truth.” After all, Facebook has a troubled history with the truth. Its ad-buying customers proved as much when Facebook was forced to pay $40 million to settle a lawsuit alleging that it had inflated “by up to 900 percent” the time it said users spent watching videos [ 4 ]. While Facebook would neither admit nor deny the truth of this allegation, it did admit to the error in a 2016 statement [ 14 ].

This was not some innocuous lie that just cost a few firms some money either. As Slate explained in a 2018 article, “many [publications] laid off writers and editors and cut back on text stories to focus on producing short, snappy videos for people to watch in their Facebook feeds.” [ 40 ] People lost their livelihoods to this deception.

Is this an isolated incident? Or is fraud at Facebook systemic? Matt Stoller describes the contents of recently unsealed legal documents [ 12 ] in a lawsuit alleging Facebook has defrauded advertisers for years [ 46 ]:

The documents revealed that Facebook COO Sheryl Sandberg directly oversaw the alleged fraud for years. The scheme was simple. Facebook deceived advertisers by pretending that fake accounts represented real people, because ad buyers choose to spend on ad campaigns based on where they think their customers are. Former employees noted that the corporation did not care about the accuracy of numbers as long as the ad money was coming in. Facebook, they said, “did not give a shit.” The inflated statistics sometimes led to outlandish results. For instance, Facebook told advertisers that its services had a potential reach of 100 million 18–34-year-olds in the United States, even though there are only 76 million people in that demographic. After employees proposed a fix to make the numbers honest, the corporation rejected the idea, noting that the “revenue impact” for Facebook would be “significant.” One Facebook employee wrote, “My question lately is: how long can we get away with the reach overestimation?” According to these documents, Sandberg aggressively managed public communications over how to talk to advertisers about the inflated statistics, and Facebook is now fighting against her being interviewed by lawyers in a class action lawsuit alleging fraud.

Facebook’s embrace of deception extends from its ad-buying fraud to the content on its platforms. For instance:

Those who would “aid[] and abet[] the spread of climate misinformation” on Facebook benefit from “a giant loophole in its fact-checking program.” Evidently, Facebook gives its staff the power to overrule climate scientists by deeming climate disinformation “opinion.” [ 2 ].

The former managing editor of Snopes reported that Facebook was merely using the well-regarded fact-checking site for “crisis PR,” that they did not take fact checking seriously and would ignore concerns [ 35 ]. Snopes tried hard to push against the Myanmar disinformation campaign, amongst many other issues, but its concerns were ignored.

ProPublica recently reported [ 18 ] that Sheryl Sandberg silenced and censored a Kurdish militia group that “the Turkish government had targeted” in order to safeguard their revenue from Turkey.

Mark Zuckerberg and Joel Kaplan intervened [ 37 ] in April 2019 to keep Alex Jones on the platform, despite the right-wing conspiracy theorist’s lead role in spreading disinformation about the 2012 Sandy Hook elementary school shooting and the 2018 Parkland high school shooting.

Arguably, Facebook’s executive team has not only ceded responsibility as an “arbiter of truth,” but has also on several notable occasions, intervened to ensure the continued proliferation of disinformation.

8 How do we disengage?

Facebook’s business model is focused entirely on increasing growth and user engagement. Its algorithms are extremely effective at doing so. The steps Facebook has taken, such as building “AI filters” or partnering with independent fact checkers, are superficial and toothless. They cannot begin to untangle the systemic issues at the heart of this matter, because these issues are Facebook’s entire reason for being.

So what can be done? Certainly, criminality needs to be prosecuted. Executives should go to jail for fraud. Social media companies, and their organizational leaders, should face legal liability for the impact made by the content on their platforms. One effort to impose legal liability in the US centers on reforming Section 230 of the US Communications Decency Act. It, and similar laws around the world, should be reformed to create far more meaningful accountability and liability for the promotion of disinformation, violence, and extremism.

Most importantly, monopolies should be busted. Existing antitrust laws should be used to break up Facebook and restrict its future activities and acquisitions.

The matters outlined here have been brought to the attention of Facebook’s leadership in countless ways that are well documented and readily provable. But the changes required go well beyond effective leveraging of AI. At its heart, Facebook will not change because it does not want to and is not incentivized to. Facebook must be regulated, and Facebook’s leadership structure must be dismantled.

It seems unlikely that politicians and regulators have the political will to do all of this, but there are some encouraging signs, especially regarding antitrust investigations [ 9 ] and lawsuits [ 28 ] in both the US and Europe. Still, this issue goes well beyond mere enforcement. Somehow we must shift the incentives for social media companies, who compete for, and monetize, our attention. Until we stop rewarding Facebook’s illicit behavior with engagement, it’s hard to see a way out of our current condition. These companies are building technology that is designed to draw us in with problematic content, addict us to outrage, and ultimately drive us apart. We no longer agree on shared facts or truths, a condition that is turning political adversaries into bitter enemies, that is transforming ideological difference into seething contempt. Rather than help us lead more fulfilling lives or find truth, Facebook is helping us to discover enemies among our fellow citizens, and bombarding us with reasons to hate them, all to the end of profitability. This path is unsustainable.

The only thing Facebook truly understands is money, and all of their money comes from engagement. If we disengage, they lose money. If we delete, they lose power. If we decline to be a part of their ecosystem, perhaps we can collectively return to a shared reality.

Facebook executives have, themselves, acknowledged that Facebook profits from the spread of misinformation: https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news .

Cybersecurity for Democracy. (March 3, 2021). “Far-right news sources on Facebook more engaging.” https://medium.com/cybersecurity-for-democracy/far-right-news-sources-on-facebook-more-engaging-e04a01efae90 .

Facebook claims to have since broadened the metrics it uses to calculate executive pay, but to what extent this might offset the prime directive of maximizing user engagement is unclear.

Allcott, H., et al.: The Welfare Effects of Social Media. (2019). https://web.stanford.edu/~gentzkow/research/facebook.pdf

Atkin, E.: Facebook creates fact-checking exemption for climate deniers. Heated . (2020). https://heated.world/p/facebook-creates-fact-checking-exemption

Auxier, B., Anderson, M.: Social Media Use in 2021. Pew Research Center. (2021). https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2021/04/PI_2021.04.07_Social-Media-Use_FINAL.pdf

Baron, E.: Facebook agrees to pay $40 million over inflated video-viewing times but denies doing anything wrong. The Mercury News . (2019). https://www.mercurynews.com/2019/10/07/facebook-agrees-to-pay-40-million-over-inflated-video-viewing-times-but-denies-doing-anything-wrong/

Boxell, L.: “The internet, social media, and political polarisation.” (2017). https://voxeu.org/article/internet-social-media-and-political-polarisation

Bradshaw, S., Howard, P.N.: The Global Disinformation Disorder: 2019 Global Inventory of Organised Social Media Manipulation. Working Paper 2019.2. Oxford: Project on Computational Propaganda. (2019)

Cabato, R.: Death threats, clone accounts: Another day fighting trolls in the Philippines. The Washington Post . (2020). https://www.washingtonpost.com/world/asia_pacific/facebook-trolls-philippines-death-threats-clone-accounts-duterte-terror-bill/2020/06/08/3114988a-a966-11ea-a43b-be9f6494a87d_story.html

Card, C.: “How Facebook AI Helps Suicide Prevention.” Facebook. (2018). https://about.fb.com/news/2018/09/inside-feed-suicide-prevention-and-ai/

Chee, F.Y.: “Facebook in EU antitrust crosshairs over data collection.” Reuters. (2019). https://www.reuters.com/article/us-eu-facebook-antitrust-idUSKBN1Y625J

Cyberbullying Statistics. Enough Is Enough. https://enough.org/stats_cyberbullying

Dwoskin, E.: Facebook’s Sandberg deflected blame for Capitol riot, but new evidence shows how platform played role. The Washington Post . (2021). https://www.washingtonpost.com/technology/2021/01/13/facebook-role-in-capitol-protest

DZ Reserve and Cain Maxwell v. Facebook, Inc. (2020). https://www.economicliberties.us/wp-content/uploads/2021/02/2021.02.17-Unredacted-Opp-to-Mtn-to-Dismiss.pdf

Eisenstat, Y.: Dear Facebook, this is how you’re breaking democracy [Video]. TED . (2020). https://www.ted.com/talks/yael_eisenstat_dear_facebook_this_is_how_you_re_breaking_democracy#t-385134

Fischer, D.: Facebook Video Metrics Update. Facebook . (2016). https://www.facebook.com/business/news/facebook-video-metrics-update

Fisher, M., Taub, A.: “How Everyday Social Media Users Become Real-World Extremists.” New York Times . (2018). https://www.nytimes.com/2018/04/25/world/asia/facebook-extremism.html

Frenkel, S.: “How Misinformation ‘Superspreaders’ Seed False Election Theories”. New York Times . (2020). https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html

Gallagher, A.: Profit and Protest: How Facebook is struggling to enforce limits on ads spreading hate, lies and scams about the Black Lives Matter protests . The Institute for Strategic Dialogue (2020)

Gillum, J., Ellion, J.: Sheryl Sandberg and Top Facebook Execs Silenced an Enemy of Turkey to Prevent a Hit to the Company’s Business. ProPublica . (2021). https://www.propublica.org/article/sheryl-sandberg-and-top-facebook-execs-silenced-an-enemy-of-turkey-to-prevent-a-hit-to-their-business

Gonzalez, R.: “Facebook Opens Its Private Servers to Scientists Studying Fake News.” Wired . (2018). https://www.wired.com/story/social-science-one-facebook-fake-news/

Guhl, J., Davey, J.: Hosting the ‘Holohoax’: A Snapshot of Holocaust Denial Across Social Media . The Institute for Strategic Dialogue (2020).

Hao, K.: “How Facebook got addicted to spreading misinformation”. MIT Technology Review . (2021). https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation

Horwitz, J., Seetharaman, D.: “Facebook Executives Shut Down Efforts to Make the Site Less Divisive.” Wall St Journal (2020)

Boxell, L., Gentzkow, M., Shapiro, J.M.: Internet use and political polarization. Proc. Natl. Acad. Sci. 114(40), 10612–10617 (2017). https://doi.org/10.1073/pnas.1706588114

Johnson, S.L., et al.: Understanding echo chambers and filter bubbles: the impact of social media on diversification and partisan shifts in news consumption. MIS Q. (2020). https://doi.org/10.25300/MISQ/2020/16371


Johnson, N.F., Velásquez, N., Restrepo, N.J., et al.: The online competition between pro- and anti-vaccination views. Nature 582 , 230–233 (2020). https://doi.org/10.1038/s41586-020-2281-1

Jones, J.: In Election 2020, How Did The Media, Electoral Process Fare? Republicans, Democrats Disagree. Knight Foundation . (2020). https://knightfoundation.org/articles/in-election-2020-how-did-the-media-electoral-process-fare-republicans-democrats-disagree

Kantrowitz, A.: “Facebook Is Still Prioritizing Scale Over Safety.” Buzzfeed.News . (2019). https://www.buzzfeednews.com/article/alexkantrowitz/after-years-of-scandal-facebooks-unhealthy-obsession-with

Kendall, B., McKinnon, J.D.: “Facebook Hit With Antitrust Lawsuits by FTC, State Attorneys General.” Wall St. Journal. (2020). https://www.wsj.com/articles/facebook-hit-with-antitrust-lawsuit-by-federal-trade-commission-state-attorneys-general-11607543139

Lauer, D.: [@dlauer]. And yet people believe them because of misinformation that is spread and monetized on facebook [Tweet]. Twitter. (2021). https://twitter.com/dlauer/status/1363923475040251905

Lauer, D.: You cannot have AI ethics without ethics. AI Ethics 1 , 21–25 (2021). https://doi.org/10.1007/s43681-020-00013-4

Lavi, M.: Do Platforms Kill? Harvard J. Law Public Policy. 43 (2), 477 (2020). https://www.harvard-jlpp.com/wp-content/uploads/sites/21/2020/03/Lavi-FINAL.pdf

LeCun, Y.: [@ylecun]. Does anyone still believe whatever these people are saying? No one should. Believing them kills [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1363923178519732230

LeCun, Y.: [@ylecun]. The section about FB in your article is factually wrong. For starter, AI is used to filter things like hate speech, calls to violence, bullying, child exploitation, etc. Second, disinformation that endangers public safety or the integrity of the democratic process is filtered out [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1364010548828987393

LeCun, Y.: [@ylecun]. As attractive as it may seem, this explanation is false. [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1363985013147115528

Levin, S.: ‘They don’t care’: Facebook factchecking in disarray as journalists push to cut ties. The Guardian . (2018). https://www.theguardian.com/technology/2018/dec/13/they-dont-care-facebook-fact-checking-in-disarray-as-journalists-push-to-cut-ties

Mac, R.: “Growth At Any Cost: Top Facebook Executive Defended Data Collection In 2016 Memo—And Warned That Facebook Could Get People Killed.” Buzzfeed.News . (2018). https://www.buzzfeednews.com/article/ryanmac/growth-at-any-cost-top-facebook-executive-defended-data

Mac, R., Silverman, C.: “Mark Changed The Rules”: How Facebook Went Easy On Alex Jones And Other Right-Wing Figures. BuzzFeed.News . (2021). https://www.buzzfeednews.com/article/ryanmac/mark-zuckerberg-joel-kaplan-facebook-alex-jones

Mainstreaming Extremism: Social Media’s Role in Radicalizing America: Hearings before the Subcommittee on Consumer Protection and Commerce of the Committee on Energy and Commerce, 116th Cong. (2020) (testimony of Tim Kendall)

Meade, A.: “Facebook greatest source of Covid-19 disinformation, journalists say”. The Guardian . (2020). https://www.theguardian.com/technology/2020/oct/14/facebook-greatest-source-of-covid-19-disinformation-journalists-say

Oremus, W.: The Big Lie Behind the “Pivot to Video”. Slate . (2018). https://slate.com/technology/2018/10/facebook-online-video-pivot-metrics-false.html

Wood, M.J.: Propagating and debunking conspiracy theories on Twitter during the 2015–2016 Zika virus outbreak. Cyberpsychology, Behavior, and Social Networking 21(8) (2018). https://doi.org/10.1089/cyber.2017.0669

Rajagopalan, M., Nazim, A.: “We Had To Stop Facebook”: When Anti-Muslim Violence Goes Viral. BuzzFeed.News . (2018). https://www.buzzfeednews.com/article/meghara/we-had-to-stop-facebook-when-anti-muslim-violence-goes-viral

Rosalsky, G.: “Are Conspiracy Theories Good For Facebook?”. Planet Money . (2020). https://www.npr.org/sections/money/2020/08/04/898596655/are-conspiracy-theories-good-for-facebook

Silverman, C., Mac, R.: “I Have Blood on My Hands”: A Whistleblower Says Facebook Ignored Global Political Manipulation. BuzzFeed.News . (2020). https://www.buzzfeednews.com/article/craigsilverman/facebook-ignore-political-manipulation-whistleblower-memo

Stecklow, S.: Why Facebook is losing the way on hate speech in Myanmar. Reuters . (2018). https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/

Stoller, M.: Facebook: What is the Australian law? And why does FB keep getting caught for fraud?. Substack. (2021). https://mattstoller.substack.com/p/facecrook-dealing-with-a-global-menace

Vosoughi, S., Roy, D., Aral, S.: The spread of true and false news online. Science 359(6380), 1146–1151 (2018). https://doi.org/10.1126/science.aap9559

The White House 45 Archived [@WhiteHouse45]: “These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen. Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!” [Tweet]. Twitter. (2020) https://twitter.com/WhiteHouse45/status/1266342941649506304

UNICEF.: UNICEF poll: More than a third of young people in 30 countries report being a victim of online bullying. (2019). https://www.unicef.org/press-releases/unicef-poll-more-third-young-people-30-countries-report-being-victim-online-bullying


Author information

Authors and affiliations

Urvin AI, 413 Virginia Ave, Collingswood, NJ, 08107, USA

David Lauer


Corresponding author

Correspondence to David Lauer .



About this article

Lauer, D. Facebook’s ethical failures are not accidental; they are part of the business model. AI Ethics 1 , 395–403 (2021). https://doi.org/10.1007/s43681-021-00068-x


Received : 13 April 2021

Accepted : 29 May 2021

Published : 05 June 2021

Issue Date : November 2021

DOI : https://doi.org/10.1007/s43681-021-00068-x


TechRepublic


Facebook, bad news and business ethics: Consider the consequences of every decision


By Patrick Gray

Facebook has been subject to significant scrutiny lately, as The Wall Street Journal’s story, The Facebook Files, detailed how the company allegedly put user engagement and profits well ahead of a variety of significant social harms. The source of the material was a former Facebook employee, who apparently trawled the company’s internal sharing tools and found all manner of documents and research that the company had commissioned, which identified these problems and were ultimately filed away in the digital equivalent of a dusty and disused bookshelf.

SEE: Policy pack: Workplace ethics (TechRepublic Premium)

The front page of the newspaper test

Most of us have heard the old trope that “if you wouldn’t want something to appear on the front page of the newspaper, you shouldn’t send it,” and that applies in the literal sense with the Facebook story, as well as recent news of former NFL coach Jon Gruden’s email correspondence. In the case of allegations as shocking and profound as these, not only should a leader not want that information to appear in the news, but the leader should also fear reports of inaction or sheer ignorance of the problem being reported, as is the case with Facebook.

It’s not hard to imagine that a strong focus on technical prowess and capturing market share would make it easy to dismiss or ignore concerns raised by internal researchers and investigators. A company’s culture can also promote such willful ignorance. As technologists, it’s often too easy to dismiss ethics as a concern best left to academics and philosophers or regard them as a concern that’s “above my pay grade.”

SEE: WSJ’s Facebook series: Leadership lessons about ethical AI and algorithms (TechRepublic)

However, as technology leaders, we’re increasingly in a position to observe and direct human interactions in an unprecedented manner. Our algorithms are no longer supporting characters in how a business runs, but often are the core asset of a company or the “canary in the coal mine” that drives and identifies how our company operates and behaves in intended and unintended ways. The case of Facebook shines a light on how the pursuit of a business KPI like user engagement can create a raft of unintended ethical consequences.

When you come across these unintended consequences, try to imagine how it would look if your name were in a news story about how your company ignored unethical practices. How would your justifications about lack of time, demands to meet your metrics, or “hey, I’m just the tech person” look when you are presented as someone who saw evidence of bad behavior and willfully ignored it? Would you rather be featured in this hypothetical story as the person who aided and abetted the behavior or the one who at least attempted to address it with colleagues and other executives?

How to start a conversation about potentially unethical behavior

The most frequent excuse for avoiding conversations about unethical behavior is worry about one’s standing in the organization. Nobody wants to be perceived as a troublemaker or as someone who routinely reports malfeasance when none exists. Rather than coming to colleagues with accusations or forecasts of doom and gloom, approach the situation as a business problem to be solved and seek to form a team to solve it. Present the concern and use some variation of the “newspaper test” to provide a means for discussing the harm generated by the ethical problem, one that takes individual actors out of the equation and shifts the discussion away from allocating blame and toward solving the problem.

Don’t be afraid to test your concerns with colleagues outside the impacted areas who have a vested interest in the success of your organization and know the culture but may not be as intimately involved with the matter at hand. If your concerns are either dismissed outright or you receive a hostile response, then you have an extremely important data point on the value your organization places on ethical behavior and one that should guide your individual ethical calculations on how to proceed.

Most large organizations have an internal or external whistleblower hotline. The term whistleblower can carry negative connotations, but think of it as the whistle of a referee, pausing play in order to make sure the rules are being followed appropriately.

SEE: Whistleblower policy (TechRepublic Premium)

If you’re still uncomfortable sharing your concerns internally, either through your management channels, uninvolved colleagues or whistleblower resources, another option is getting outside eyes. If you pursue this route, be aware that the type of external resource may bias the feedback you receive. For example, suppose you engage your technical partners to assess the ethics of your algorithm; they may return an assessment that focuses too much on the technology and not enough on your ethical concerns. Outside counsel may focus on the legality of the activity in question, and obviously, things that are legal are not always ethical or items you’d like to see on the front page of the newspaper. An appropriate option might be to engage people in academia. Professors of business ethics should have a foundational knowledge of various ethical frameworks, a reasonably current understanding of the market and business climate, and not be overly biased in assessing your tech or the legal nuances.

Address ethical issues as early as possible

Without fail, when stories like what is happening at Facebook and other social media companies break, other people emerge from the shadows, share the concerns of the whistleblower, and lament that “I should have done something.” There are many good options short of testifying in front of Congress, so strive to be the leader who identifies and mitigates ethical concerns before they escalate to former employees on the front page of the newspaper.



WashU Experts: Facebook controversy raises ethical questions for corporations


This week, Facebook whistleblower Frances Haugen testified about the tens of thousands of pages of internal documents she leaked, which expose how Facebook prioritized profits over the public’s safety, and called on lawmakers to regulate the social media network.

By bringing to light the consequences of Facebook’s algorithms, Haugen’s testimony has forced corporations to rethink their relationship with Facebook and use of consumer data, according to digital media experts at Olin Business School at Washington University in St. Louis.

“Most advertisers who invest in Facebook or other social media platforms are aware of the ways in which these technologies collect and use customer data to improve the ROI (return on investment) of advertising dollars. In fact, these capabilities are positioned as a selling point,” said Michael Wall, professor of marketing practice and co-director of the Center for Analytics and Business Insights at Olin Business School.


“That said, other aspects of how Facebook and others drive great returns for their advertisers have been hidden within their algorithms. The whistleblower has changed that. Advertisers are now aware, and they will now be faced with decisions related to both the ethical use of data and being values-based.” 

According to Wall, business leaders should be thinking hard about how their firms — many of which have become dependent on platforms such as Facebook for business growth — will use customer data responsibly.

“Certainly, the amount of users on these platforms is appealing in that it enables marketers the ability to reach a lot of consumers. That said, the real value is driven by the algorithms within these platforms that track everything we say and do to pinpoint which of those users should see our content, when and how many times,” Wall said.

This raises ethical questions about what is appropriate to not only track, but also share with third parties — some of whom use the data to advertise and track consumers beyond the original platforms. It’s easy to focus on Facebook given its behemoth size, but any company using consumer data is at risk of causing harm to its consumers.


“Every organization with access to rich consumer data, using Facebook as an advertising vehicle or not, must at least from time to time confront the dilemma: should some information be used to improve the profit line in the short run even though it might not be in the best interest of a consumer?” said Yulia Nevskaya, assistant professor of marketing at Olin Business School.

“It is a difficult situation to manage for a brand, given that the interests of a particular manager might not always align well with the long-term success of the brand. Implementing a data-driven and values-based culture and decision-making is key.”

Consumers are demanding change

This is not the first time Facebook has found itself in the hot seat for its handling of user data, misinformation and other threats to American democracy. The more customers, companies and government have learned about social media, the more pushback has been generated from each stakeholder, Wall said.

Change is already underway. For example, a feature in Apple’s recent iOS 14.5 update notifies customers when apps are tracking their data and gives consumers the ability to block that tracking.

“This was a massive blow to Facebook, among others, who rely on that tracking to drive more advertising revenue,” Wall said. “Apple isn’t the only one. Google is also preparing to block third-party cookie tracking. These industry actions, coupled with government policies such as GDPR (General Data Protection Regulation) and the California Privacy Rights Act, will make the use of customer data more difficult in years to come.” 

Companies have a choice: Short-term profits or long-term growth

Limiting the use of consumer data or cutting ties with Facebook altogether may seem like an unfathomable choice for businesses. However, taking a stand now could pay dividends down the road.

“Research over the last several years has shown that customers prefer buying from companies that are aligned with their personal values,” Wall said.

In 2018, Nike was one of the first major brands to take a controversial stand with its Colin Kaepernick commercial. Since then, many more companies have taken stands on social issues that align with their brand values, such as racial injustice, voting rights, gun laws, climate change and LGBTQ rights.

“As a marketer, my position is that brand equity is ultimately not driven by advertising. Furthermore, it is something we certainly cannot control. Instead, our brand is something we steward,” Wall said.

“This stewardship is driven by choices we make, which drive the actions we take, and together they lead to consequences in the market. Leaders must make tough choices about near-term growth and long-term growth. The wrong choices today may enable more profit today but may also lead to decreases down the road.” 

Consumer literacy is essential

Social media are new, powerful and complex players, and we, as a society and as individuals, have to equip ourselves very quickly to live with them, Nevskaya said.

“Social media shapes our world, our information bubble and our choices. We now know that our Facebook feed is carefully calculated by algorithms that decide which political opinions, sources of information and products are most likely to elicit a response from us,” she said.

Facebook and other social media companies — possibly with the help of regulators — have a responsibility to confront the ethical dilemma of their business. At the same time, consumers need access to reliable information about the ways in which social media impacts their lives.

“Consumer literacy should be taken seriously and implemented in a comprehensive way, starting at an early age,” Nevskaya said.   

“Over the last two decades, as the situation with Facebook illustrates, companies and organizations developed extremely sophisticated tools to advertise and promote their products and ideas,” Nevskaya said. “Gone are the days when a television ad for a major brand consisted of mostly repeating the name of the brand many times in a loud voice, which marketers believed would make the consumer remember the product and buy it.

“Consumers are smart, but they need to be fully aware of the new methods, how exactly their personal information is used by organizations and to be offered very concrete tips on navigating modern marketing.”



Facebook’s moral quandary

Colleen Walsh, Harvard Staff Writer

Harvard psychologist Joshua Greene explains social media giant’s trolley problem

Testimony by former Facebook employee Frances Haugen, who holds a degree from Harvard Business School, and a series in the Wall Street Journal have left many, including Joshua Greene , Harvard professor of psychology, calling for stricter regulation of the social media company. Greene, who studies moral judgment and decision-making and is the author of “Moral Tribes: Emotion, Reason, and the Gap Between Us and Them,” says Facebook executives’ moral emotions are not well-tuned to the consequences of their decisions, a common human frailty that can lead to serious social harms. Among other things, the company has been accused of stoking division through the use of algorithms that promote polarizing content and ignoring the toxic effect its Instagram app has on teenage girls. In an interview with the Gazette, Greene discussed how his work on moral dilemmas can be applied to Facebook. The interview has been edited for length and clarity.

Do companies like Facebook have a moral responsibility to their users and to the public at large?

Absolutely. These companies have enormous power over people’s lives. In my view, anyone who has that kind of power has an obligation to exercise it responsibly. And part of that may mean relinquishing some of that power. Unfortunately, when these companies make disastrous choices, their actions don’t feel disastrous to them — at least not until they get the blowback from the people who’ve been harmed. As decision-makers, their emotions are not tuned to the gravity and scope of the moral problem.

Is there an example from your field of research that can clarify what is happening at Facebook?

The most emotionally salient moral transgressions are basic acts of physical violence, things like punching someone in the face. When Facebook harms people, the people making those decisions don’t feel like they’re punching someone in the face. This is because the harm is caused passively rather than actively, because the harm is caused as a side-effect rather than intentionally, and because the harm is caused very indirectly — mediated by both technology and the actions of other people. Because of these factors, Facebook executives don’t feel like they’re undermining the mental health of millions of teenage girls or putting a gun to the head of American democracy. Instead, they feel that they’re running a business while managing some “very challenging problems.”

We can break this down using the kinds of moral dilemmas that I and other researchers have used to study the mechanisms of moral judgment.

Imagine that your prized possession is on the trolley tracks — perhaps an antique guitar that’s been in your family for generations. You can save your guitar from the oncoming trolley by pushing someone off a footbridge and onto the tracks. Would you do it? Not unless you’re a psychopath. This is murder, and it feels like murder. It feels like murder because it’s active, direct, and fully intentional. You wouldn’t do this to save your guitar, and Mark Zuckerberg, I believe, wouldn’t do this to preserve Facebook’s profits.

Now let’s adjust the case a bit. Suppose that instead of pushing the person off the footbridge, you could hit a switch that would drop them through a trapdoor onto the tracks. It’s wrong to hit that switch. But it’s certainly easier to hit the switch than to push the person with your bare hands. It’s easier because it’s less direct, because the action does not require your “personal force.” (And it would be even easier if you could get someone else to do the job for you.)

Now let’s make it even easier. The trolley is headed toward your guitar, but this time you don’t have to use anyone as a trolley-stopper. You can hit a switch that will turn the trolley away from the guitar and onto another track. Unfortunately, there’s a person on that track. It’s still wrong to hit the switch, but it feels a bit more defensible. You can say to yourself, “I’m not trying to kill this person. I’m just trying to save my guitar.” You’re killing the person as a side-effect. It’s “collateral damage.”

Now imagine that the switch is already thrown. To save your guitar, you don’t have to do anything. Sure, you could throw the switch back the other way, away from the person and onto your guitar. But is that your job? Is it your responsibility? Now that the harm is caused passively, it’s a lot easier to say, “Well, I feel very bad about this, but it’s not my fault. I didn’t set this runaway trolley in motion. I’m just minding my own business. I have a responsibility to protect my family’s guitar.”

And if you’re feeling somewhat guilty, you can make yourself feel better. Suppose that it’s a very heavy switch. If you pull with all your might, you can turn the trolley away from the person and onto your guitar. So, what do you do? You pull pretty hard, but not hard enough. “I’m trying here! Please understand that this is a very heavy switch!”

That’s basically what Facebook has been saying and doing. American democracy is in peril. The mental health of millions of teenagers is in peril. Facebook doesn’t want these bad things to happen, but they don’t feel compelled to do the heavy pulling that’s necessary to prevent them from happening. Facebook isn’t actively and intentionally and directly causing these problems. Instead, it’s allowing these things to happen as indirect side-effects of it running its business as profitably as possible. And, of course, Facebook is trying to help. But not hard enough. There are things Facebook is not willing to sacrifice.

The problem is that company leaders’ moral emotions — and our moral emotions — are not well-tuned to the situation. Actions that don’t feel particularly violent can do terrible damage on a massive scale. And it doesn’t seem like a terrible thing until the damage has been done. We need systems that acknowledge this and compensate for these human limitations.


How do we make this happen?

As Frances Haugen, the Facebook whistleblower, has said, we need regulation. This does two things. First, it puts the decisions in the hands of professionals who can think about the broader consequences, rather than relying on gut reactions. Second, it leaves the decisions to people who don’t have a conflict of interest — people whose job is to protect the public.

We don’t let pharmaceutical companies decide for themselves which drugs are safe because they have a conflict of interest. We don’t let transportation companies decide for themselves which vehicles are safe because they have a conflict of interest. They’re human, and even if they have good intentions, they can be unconsciously biased. When your interests are at stake you find every reason not to sacrifice your own interests. You raise the threshold for evidence. You call for more research, pointing to all the ambiguities. And there are always ambiguities.

The fundamental problem is that Facebook wants to control large swaths of the world’s information infrastructure but doesn’t want to be responsible for the world. They want to hear those coins clink every time someone hops on the information highway, but they don’t want to be responsible for highway safety. Nothing is going to change until the decisions about what’s safe are made by people whose incentives are aligned with the public good, instead of aligned with corporate profit.

As Haugen said, none of this means that Zuckerberg and other executives at Facebook are evil. They’re not trying to make the world worse. But their focus is on that precious guitar. They are very attached to their billions.

What about the argument that Facebook executives can’t control how users engage with or act on their algorithms?

You can’t control what everyone does, and you don’t need to. We need a system that prevents the worst outcomes. We need to align causal responsibility with moral responsibility. Newspapers and other publishers take responsibility for what they publish. They are held accountable. It’s hard, and it takes resources, but it can be done. Someone needs to be responsible and accountable for what appears online. There are too many people online for everyone to be closely scrutinized, but the worst offenders — the ones who have the biggest audiences and can do the most damage — can be restrained.

Many might argue regulation would severely hamper economic growth.

Economic growth and regulation are more than compatible. In the long run, they go hand in hand. Think of the FDA or the FAA. Having government agencies that are responsible for keeping medicines and airplanes safe doesn’t hobble the pharmaceutical and aviation industries. It makes those industries sustainable. If American democracy is destroyed by militant extremism, it won’t be good for Facebook either. The unregulated pursuit of profit is, in the long run, bad for profit. Sustainable capitalism means operating within constraints that safeguard the public good.


Stanford Graduate School of Business
Facebook: Hard Questions (A)

In April 2018, Facebook co-founder and CEO Mark Zuckerberg was called to Capitol Hill to be the star witness at congressional hearings intended to examine Facebook’s “breaches of trust” with its users and “larger questions about the fundamental relationship tech companies have with their users.” Zuckerberg admitted that his company faced “a number of important issues around privacy, safety, and democracy” but emphasized that his company was “idealistic and optimistic…focused on all the good that connecting people can bring.”

This ethics case (A) explores some of the issues Facebook has faced since 2014, the criticism it has come under, and its responses. These issues include the “emotional contagion experiment;” privacy issues; fake news; Russian interference in US elections; the “Cambridge Analytica” scandal; charges of bias through targeted ads; and accusations of liberal bias and censorship. The second part of the case (B) then describes some of Facebook’s policy responses to these issues, including tweaking the algorithm; developing and deploying new AI tools; changing its mission; and communicating with the public.

Also see: ETH15-B: Facebook: Hard Questions (B)

Learning Objective

The case can be used for two purposes.

In classes on business ethics, the case can be used to analyze the role of moral intuitions in determining how stakeholders respond to a company’s policies. It also can be used to highlight the fact that a company’s executives and employees often fail to anticipate these stakeholder reactions.

In classes on strategy beyond markets or business and society, the case can be used to analyze companies’ strategic reactions to issues like privacy, data security, fake news, and politicization of a company’s platform. These reactions can take a variety of forms, ranging from public relations and lobbying efforts to self-regulation and rethinking the company’s products and policies.



McCombs School of Business



From the Blog

A Whistleblower Faces Down Facebook

Whistleblower Frances Haugen’s October 5, 2021 testimony before Congress regarding her former employer Facebook’s practices was simultaneously riveting and deeply unsettling. Her overarching point was that Facebook consistently prioritizes profits over users’ safety, refusing to make product reforms that would protect users from the company’s products’ biggest harms.

Facebook has, of course, faced several scandals over the years, including those related to the Cambridge Analytica privacy scandal, the Myanmar genocide, and Russian election interference. Haugen drew parallels between Facebook’s actions and those of companies selling tobacco and opioids. To illustrate her point, she highlighted, among many other practices, Facebook’s:

  • Use of “engagement-based ranking” that prioritizes extreme sentiments and dangerous content, and therefore exposes teen girls to more anorexia content, fuels political divisions within families, and contributes to ethnic violence in Ethiopia (a toy sketch of this ranking logic appears just after this list).
  • Serving as a platform that helped Stop the Steal groups organize the January 6 insurrection
  • Decision to disband its civic integrity team after the 2020 election and before the January 6 attack on the Capitol
  • Failure to stop the spread of misleading vaccine information
  • Burying its own internal research regarding the impact of Instagram on teen girls’ mental health
  • Failure to protect the most vulnerable populations (e.g., widows and people who’ve moved to new cities) from misinformation that sucked them down rabbit holes
  • Knowingly allowing authoritarian or terrorist-based leaders to use its platform to surveil enemies
  • Allowing high-profile users to skirt its content rules.
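Haugen’s claim about “engagement-based ranking” is, at bottom, a claim about what the feed’s objective function optimizes for. The toy sketch below is purely illustrative Python, not anything drawn from a real Facebook system: the Post fields, the weights, and the demotion factor are all hypothetical, chosen only to show how a ranker that maximizes predicted engagement can surface the most inflammatory post, and how even a crude demotion term changes which post wins.

```python
# Illustrative sketch only: hypothetical fields, weights, and demotion factor.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float       # hypothetical model estimates
    predicted_comments: float
    predicted_reshares: float
    outrage_score: float         # 0..1, how inflammatory the content is

def engagement_score(post: Post) -> float:
    # Comments and reshares are weighted more heavily than likes because they
    # generate follow-on activity; inflammatory posts tend to draw more of both,
    # so they float to the top of the feed.
    return (1.0 * post.predicted_likes
            + 5.0 * post.predicted_comments
            + 30.0 * post.predicted_reshares)

def demoted_score(post: Post) -> float:
    # One possible alternative: demote content in proportion to how inflammatory
    # it is, trading some engagement for lower harm.
    return engagement_score(post) * (1.0 - 0.9 * post.outrage_score)

posts = [
    Post("measured policy explainer", 120, 10, 5, 0.05),
    Post("conspiratorial rage bait", 80, 60, 40, 0.95),
]

print(max(posts, key=engagement_score).text)  # conspiratorial rage bait
print(max(posts, key=demoted_score).text)     # measured policy explainer
```

The specific numbers do not matter; the structural point, which recurs throughout the testimony, is that whatever quantity the ranker maximizes is the quantity the platform ends up producing more of.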

We are pretty sure that the Big Cheeses at Facebook, including Mark Zuckerberg himself, are generally speaking good people and certainly think of themselves as such. If we assume there is any substantial core of truth to Ms. Haugen’s testimony, then we face one of the most common questions in behavioral ethics research—why do good people do bad things? Most people thought well of Enron before its collapse in scandal. People today are probably more dubious of Facebook, but both firms likely fell prey to the same sorts of psychological flaws in moral decision-making.

This is speculative, of course, but we think that overconfidence may have had something to do with it. After all, Facebook was the coolest kid on the block for such a long time. It has been widely admired, its products broadly enjoyed, and its commercial success is nearly unrivaled. It has 2.9 billion monthly users and enjoyed $54 billion in advertising revenue in just the first half of 2021. Zuckerberg’s overconfidence has been noted by his former close mentor, Roger McNamee, among others. The overconfidence that people, especially successful people, tend to have regarding their various abilities (driving, speaking, managing, etc.) often spills over into matters of morals. Impossibly high percentages of people believe they are more moral than most of their colleagues and competitors, and over 90% of us are satisfied with our moral character. The head honchos at Facebook may be overconfident regarding their morals and likely have instituted policies and practices without carefully thinking through their moral implications.

The self-serving bias is, of course, the tendency that people have to gather, process, and even remember information in ways that serve their own perceived self-interest. If you happened to own a big chunk of Facebook shares, as most of the company’s “deciders” no doubt do, decisions that help the firm’s profitability and stock price performance might well seem to be the right thing to do even though they’re just the right thing for them.

Framing is an important concept because the decisions that people make have everything to do with what is in their frame of reference at the time they make the decision. If profitability dominates your frame of reference, ethical issues may, through a process called ethical fading, nearly disappear from view. Ms. Haugen’s observation that Facebook consistently prioritizes profits above users’ safety is evidence that Facebook executives may have a framing problem. Haugen also noted Facebook’s “deep focus on scale,” which leads it to rely on artificial intelligence rather than more effective human employees in judging which offensive content should be blocked—another framing problem.

Haugen pronounced that it was time for Facebook to declare “moral bankruptcy.” One suspects that Ms. Haugen’s testimony will spark continued and intense scrutiny of Facebook that will exceed anything the company has seen before. If the leaders of Facebook wish to solve the problems and mitigate the damage that have prompted that scrutiny, they should engage in a little self-examination using behavioral ethics principles as a guide.

Robin Givhan, “The Whistleblower Came to Advocate for Humans over Algorithms,” Washington Post , Oct. 5, 2021.

Kevin Granville, “Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens,” New York Times , Mar. 19, 2018.

Mike Isaac & Daisuke Wakabayashi, “Russian Influence Reached 126 Million Through Facebook Alone,” New York Times , Oct. 30, 2017.

Adrienne LaFrance, “The Largest Autocracy on Earth,” The Atlantic , Sept. 27, 2021.

Ulrike Malmendier & Geoffrey Tate, “Behavioral CEOs: The Role of Managerial Overconfidence,” Journal of Economic Perspectives 29(4): 37 (2015).

Roger McNamee, Zucked: Waking Up to the Facebook Catastrophe (2019).

Mariella Moon, “Mark Zuckerberg Denies Facebook Puts Profit Over Users’ Safety,” Engadget, Oct. 6, 2021, at https://www.engadget.com/mark-zuckerberg-denies-facebook-profit-over-safety-033717690.html.

Alexandra Stevenson, “Facebook Admits It Was Used to Incite Violence in Myanmar,” New York Times , Nov. 6, 2018.

Behavioral Ethics:  https://ethicsunwrapped.utexas.edu/glossary/behavioral-ethics

Ethical Fading: https://ethicsunwrapped.utexas.edu/video/ethical-fading

Framing: https://ethicsunwrapped.utexas.edu/video/framing

Overconfidence Bias: https://ethicsunwrapped.utexas.edu/video/overconfidence-bias

Self-Serving Bias: https://ethicsunwrapped.utexas.edu/video/self-serving-bias



Harvard Business School

Facebook-Can Ethics Scale in the Digital Age?

By: George A. Riedel, Carin-Isabel Knoop


  • Length: 38 page(s)
  • Publication Date: Aug 10, 2018
  • Discipline: Business Ethics
  • Product #: 319030-PDF-ENG


Since its founding in 2004, Facebook has built a phenomenally successful business at global scale to become the fifth most valuable public company in the world. The revelation of Cambridge Analytica events in March 2018, where 78 million users' information was leaked in a 2016 U.S. election cycle, exposed a breach of trust/privacy among its user community. In the past, growth at any costs appeared to be the de facto strategy. Now many voices such as regulators, advertisers, ethicists, shareholders and users argued for a more responsible approach to addressing their concerns. Mark Zuckerberg (CEO/Chair/Founder) and Sheryl Sandberg (COO) mapped out their six-point plan to address this existential threat. Could they continue to grow and rectify the breach of trust/privacy? Did other stakeholders have some greater responsibility too? In addition to issues of privacy and trust, there is a growing chorus of concern about "content moderation"-not for the easy topics like spam or copyright material-but for the hard things revolving around political points of view, hate speech, polarizing perspectives, etc. How will Facebook strike the balance between free speech and corrosive content across billions of users and dozens of languages? Are they the arbiters of truth/censorship in the digital world?

Learning Objectives

This case will be used in LCA (Leadership and Corporate Accountability) in the Customer Module. The objective of the case is to get students to focus on the balance of responsibilities to customers, users and other stakeholders in a business that has grown to a global scale. Is a user a customer? What responsibilities does Facebook have with regards to the platform and ecosystem they've built? The case touches on legal, ethical and economic issues associated with running the world's largest social media platform and the growing concern that users' trust in the platform had eroded.



Facebook takes its ethics into the metaverse - and critics are worried

Facebook is under fire (again) for its questionable business ethics – what exactly did they do wrong and what will happen as a result?

Facebook has faced criticism for not acting on its own research, which found that Instagram worsens self-image and mental health in teenage girls. Picture: Shutterstock


In September 2021,  The Wall Street Journal  published a series of damning articles on Facebook. Based on internal documents, several ethically questionable practices within the technology company were highlighted. 

Later revealed to have been leaked by whistleblower Frances Haugen, a product manager in Facebook’s civic integrity team, the documents included revelations that Facebook’s own research showed Instagram (which it owns) worsens self-image and mental health in teenage girls, as well as the existence of ‘VIP users’ who are exempted from certain platform rules.

This is just the latest big scandal to hit the tech company, which has also been under scrutiny for its poor handling of user data in the Cambridge Analytica scandal (2018), accusations of inciting genocide in Myanmar (2018), and the spread of misinformation and ‘fake news’ during the 2016 US presidential election, as well as consumer anger over its ‘mood’ experiment analysis and manipulation (2014).


In the recent past, Facebook has been accused of spreading fake news during the 2016 US presidential election. Photo: Shutterstock

So, what does this latest ethical controversy mean for the Silicon Valley monolith, and will it affect how Facebook - and other big tech companies - conduct themselves?


Why are consumers and policymakers upset with Facebook?  

According to Rob Nicholls, Associate Professor in Regulation and Governance at the UNSW Business School, one of the reasons policymakers and consumers alike are riled up at Facebook’s behaviour is a perceived, continued lack of responsiveness to regulatory intervention. Combined with the separate issue, that the company was sitting on a trove of information showing it knew it was causing harm and chose not to act, this makes for a poor look.

“They didn’t use the information they had to change the approach,” he says. “You’ve got something that looks not dissimilar to big tobacco. Yes, there are harm issues, but no, we’re not going to talk about it.” 

Discovering that this information was hidden is particularly alarming to regulators because it raises the question of what else we don’t know, A/Prof. Nicholls says. There is then a ‘piling on’ effect by other concerned parties.

“In Australia, [there is] the ACCC’s chairman, Rod Sims, saying, ‘Well, why isn’t Facebook negotiating with SBS under the news media bargaining code?’ All of these things tend to compound when the company is front and centre in the news.”


We are now more aware of technology shortfalls and dangers  

A/Prof. Nicholls says there is now more awareness by consumers and policymakers about the shortcomings of Facebook and other big tech companies, with the COVID-19 pandemic leading to a stronger realisation about how much we rely on Facebook’s platforms (which include Instagram and WhatsApp). 

He also points out that in Australia, Facebook’s takedown of Australia-based pages while the News Media Bargaining Code was being debated in parliament has drawn attention to Facebook’s lack of competition and excess of control in the space.

“If Facebook can take down our Health Service website, which we’re relying on to get information on the pandemic or could cut off 1800 Respect because Facebook thinks it’s a news media business ... Suddenly there’s that realisation of how ingrained social media companies are.” 


A/Prof. Nicholls says consumers and policymakers have become more aware of how much we rely on Facebook and its platforms. Photo: Unsplash / Joshua Hoehne

“Ten years ago, [the leadership mantra of] Facebook was ‘Move fast and break things!’ That’s great. You’re a small start-up,” says A/Prof. Nicholls. “But Facebook today, your revenue is $US85 billion and you’re part of the day-to-day life of the vast majority of people. You actually have to take some responsibility.” 


Meta-bad timing: get your house in order first  

To add fuel to the fire of public debate, Facebook announced a rebranding of its parent company from ‘Facebook’ to ‘Meta’. Mark Zuckerberg’s company would now shift its focus to working on creating a ‘metaverse’ – a 3D space where people can interact in immersive online environments. 

While hiring thousands to create a metaverse might have been an exciting announcement at other times for the company formerly known as Facebook, the public and lawmakers were quick to mock and question the move as one that swept systemic issues under the carpet. 

Meta as in “we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society… for profit!” https://t.co/jzOcCFaWkJ — Alexandria Ocasio-Cortez (@AOC) October 28, 2021
Instead of Facebook changing its policies to protect children and our democracy it has chosen to simply change its name to Meta. Different name, same threat to our nation. — (((DeanObeidallah))) (@DeanObeidallah) October 28, 2021

“It should have sounded like a really great success story: Facebook is going to invest in 10,000 new tech jobs in Europe,” says A/Prof. Nicholls. 

“But instead, the answer has been, ‘Just a minute, if you’re prepared to spend that much money, why not fix the problems in the services that you currently provide, with far fewer jobs?’” 

He also points out how the identified issue of body image might in fact be amplified in the metaverse. 

“Are you now going to create a metaverse where, ‘We’re going to deal with body shape because in your virtual reality, you’ll look perfect’? 

“That is going to cause distress in itself.” 


Will the debate affect regulation around tech companies?  

According to A/Prof. Nicholls, we could be seeing a moment where agencies and government come together to protect consumers.  

“It’s possible, but it takes a bit of political will,” he says. “In the past, Facebook, Google, and to a lesser extent, Amazon, Microsoft, and Apple have each been able to delay and deflect, partly because it takes a big political decision.” 


Moves to curtail the power of Facebook and its platforms might be politically popular. Photo: Unsplash / Brett Jordan

A/Prof. Nicholls says the difference now is that there is the realisation that a political decision on this is going to be popular, making it far more likely that it will be taken. But he also points out it’s not likely to be a ‘break them up’ kind of solution. 

“Part of the reason that Facebook does touch so many of us on such a regular basis is what they offer is really useful,” he says. “The issue that flows from that is, how do you make sure that the business operates effectively without stifling innovation in that area?” 


How can other AI-based companies avoid this situation?  

While A/Prof. Nicholls does not expect to see policy changes from big tech companies (“because policy change is an admission”), he does expect that we will see some practical changes by other companies that consider the issues faced by Facebook. 

“Ultimately, if you do some research and you find really bad outcomes, if you act on those, then you’re not going to have a problem a little bit later of a whistleblower pointing out that you’ve suppressed that research,” he says, referring to Haugen. 

There is a simple way to avoid this issue though, A/Prof. Nicholls points out. By acting ethically as a business, one can avoid these problems and achieve good business outcomes without having to change too much. And for businesses that are built around algorithms, this means ensuring you’ve embedded ethical approaches throughout the AI design. 

“Ethical behaviour can be built into the design of AI. Some of that actually means that you end up with better outcomes from your AI because you ensure that you actually think about it first.” 

“It really doesn’t matter whether you’re a small start-up, doing analysis of big data, or you’re a very big platform-based company. Actually, thinking about those design processes is really important.” 
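What “building ethical behaviour into the design of AI” looks like will differ by product, but one minimal pattern is to make a harm check a mandatory stage of the pipeline rather than a post-hoc patch. The sketch below is a hypothetical illustration of that pattern only, not a description of any real platform’s system: the harm model, the threshold, and the review queue are all made up for the example, and content flagged as risky is held for human review instead of competing for amplification.

```python
# Hypothetical sketch of a ranking pipeline with a built-in harm gate.
# The harm model, threshold, and review queue are illustrative assumptions.
from typing import Callable, List, Tuple

def rank_with_harm_gate(
    items: List[str],
    engagement_model: Callable[[str], float],
    harm_model: Callable[[str], float],
    harm_threshold: float = 0.7,
) -> Tuple[List[str], List[str]]:
    """Return (items eligible for amplification, items held for human review)."""
    eligible, held_for_review = [], []
    for item in items:
        if harm_model(item) >= harm_threshold:
            held_for_review.append(item)   # never auto-amplified
        else:
            eligible.append(item)
    # Only content that passed the harm gate competes on engagement.
    eligible.sort(key=engagement_model, reverse=True)
    return eligible, held_for_review

# Toy stand-ins for learned models.
ranked, review_queue = rank_with_harm_gate(
    ["local news update", "targeted harassment post"],
    engagement_model=lambda text: float(len(text)),               # placeholder score
    harm_model=lambda text: 0.9 if "harassment" in text else 0.1,
)
print(ranked)        # ['local news update']
print(review_queue)  # ['targeted harassment post']
```

The design choice here is simply that the safety check runs before, not after, the engagement optimization, which is one concrete reading of ensuring “that you actually think about it first.”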


Disclaimer: Associate Professor Rob Nicholls and his team have received funding from Facebook for his research into models, people, and their interaction with Facebook.  


What if Facebook goes down? Ethical and legal considerations for the demise of big tech

Introduction.

Facebook 1 has, in large parts of the world, become the de facto online platform for communication and social interaction. In 2017, the main platform reached the milestone of two billion monthly active users (Facebook, 2017), and global user growth since then has continued, reaching 2.6 billion in April 2020 (Facebook, 2020). Moreover, in many countries Facebook has become an essential infrastructure for maintaining social relations (Fife et al., 2013), commerce (Aguilar, 2015) and political organisation (Howard and Hussain, 2013). However, recent changes in Facebook’s regulatory and user landscape stand to challenge its pre-eminent position, making its future demise if not plausible, then at least less implausible over the long-term.

Indeed, the closure of an online social network would not in itself be unprecedented. Over the last two decades, we have seen a number of social networks come and go — including Friendster, Yik Yak and, more recently, Google+ and Yahoo Groups. Others, such as MySpace, continue to languish in a state of decline. Although Facebook is arguably more resilient to the kind of user flight that brought down Friendster (Garcia et al., 2013; Seki and Nakamura, 2016; York and Turcotte, 2015) and MySpace (boyd, 2013), it is not immune to it. These precedents are important for understanding Facebook’s possible decline. Critically, they demonstrate that the closure of Facebook’s main platform does not depend on the exit of all users; Friendster, Google+ and others continued to have users when they were sold or shut down.

Furthermore, as we examine below, any user flight that precedes Facebook’s closure would probably be geographically asymmetrical, meaning that the platform remains a critical infrastructure in some (less profitable) regions, whilst becoming less critical in others. For example, whilst Friendster started to lose users rapidly in North America, its user numbers were simultaneously growing, exponentially, in South East Asia. It was eventually sold to a Filipino internet company and remained active as a popular social networking and gaming platform until 2015. 2 The closure of Yahoo! GeoCities, the web hosting service, was similarly asymmetrical: although most sites were closed in 2009, the Japanese site (which was managed by a separate subsidiary) remained open until 2019. 3 It is also important to note that, in several of these cases, a key reason for user flight was the greater popularity of another social network platform: namely, MySpace (Piskorski and Knoop, 2006) and Facebook (Torkjazi et al., 2009). Young, white demographics, in particular, fled MySpace to join Facebook (boyd, 2013).

These precedents suggest that changing user demographics and preferences, and competition from other social networks such as Snapchat or a new platform (discussed further below) could be key drivers of Facebook’s decline. However, given Facebook’s pre-eminence as the world’s largest social networking platform, the ethical, legal and social repercussions of its closure would have far graver consequences than these precedents. Rather, the demise of a global online communication platform such as Facebook could have catastrophic social and economic consequences for innumerable communities that rely on the platform on a daily basis (Kovach, 2018), as well as the users whose personal data Facebook collects and stores. 

Despite the high stakes involved in Facebook’s demise, there is little research or public discourse addressing the legal and ethical consequences of such a scenario. The aim of this article is therefore to foster dialogue on the subject. Pursuing this goal, the article provides an overview of the main ethical and legal concerns that would arise from Facebook’s demise and sets out an agenda for future research in this area. First, we identify the headwinds buffeting Facebook, and outline the most plausible scenarios in which the company — specifically, its main platform — might close down. Second, we identify four key ethical stakeholders in Facebook’s demise based on the types of harm to which they are susceptible. We further examine how various scenarios might lead to these harms, and whether existing legal frameworks are adequate to mitigate them. Finally, we provide a set of recommendations for future research and policy intervention.

It should be noted that the legal and ethical considerations discussed in this article are by no means limited to the demise of Facebook, social media, or even “Big Tech”. In particular, to the extent that most sectors in today’s economy are already, or will soon become, data-driven and data-rich, these considerations, many of which relate to the handling of Facebook’s user data, are ultimately relevant to the failure or closure of any company handling large volumes of personal data. Likewise, as human interaction becomes increasingly mediated by social networks and Big Tech platforms, the legal and ethical considerations that we address are also relevant to the potential demise of other platforms, such as Google or Twitter. However, focusing on the demise of Facebook — one of the most data-rich social networks in today’s economy — offers a fertile case study for the analysis of these critical legal and ethical questions.

Why and how could Facebook close down?

This article necessarily adopts a long-term perspective, responding to issues that could significantly harm society in the long run if we do not begin to address them today. As outlined in the introduction, Facebook is currently in robust health: aggregate user numbers on the main platform are still growing, and it continues to be highly profitable, with annual revenue and income increasing year-over-year (Facebook, 2017; 2018). As such, it is unlikely that Facebook will shut down anytime soon. However, as anticipated above, the rapidly changing socio-economic and regulatory landscape in which Facebook operates could lead to a reversal in its priorities and fortunes over the long term.

Facebook faces two major headwinds. First, the platform is coming under increasing pressure from regulators across the world (Gorwa, 2019). In particular, tighter data privacy regulation in various jurisdictions (notably, the EU General Data Protection Regulation [GDPR] 4 and the California Consumer Privacy Act [CCPA]) 5 could severely inhibit the company’s ability to collect and analyse user data. This could significantly reduce the value of the Facebook platform to advertisers, who are drawn to its granular, data-driven insights about user behaviour and the higher ad-to-sales conversion rates that targeted advertising delivers. That, in turn, would undermine Facebook’s existing business model, whereby advertising generates over 98.5% of Facebook’s revenue (Facebook, 2018), the vast majority of it on the main platform. More drastically, regulators in several countries are attempting to break up the company on antitrust grounds (Facebook, 2020, p. 64), which could lead, inter alia, to the reversal of its acquisitions of Instagram and WhatsApp — key assets, the loss of which could adversely affect Facebook’s future growth prospects.

Second, the longevity of the main Facebook platform is under threat from shifting social and social media trends. Regarding the latter, social media usage is gradually moving away from public, web-based platforms in favour of mobile-based messaging apps, particularly within younger demographics. Indeed, in more saturated markets, such as the US and Canada, Facebook’s penetration rate has declined (Facebook, 2020, pp. 31-33), particularly amongst teenagers, who tend to favour mobile-only apps such as Snapchat, Instagram and TikTok (Piper Sandler, 2020). Although Facebook and Instagram still have the largest share of the market in terms of time spent on social media, this share has declined since 2015 in favour of Snapchat (Furman, 2019, p. 26). They also face growing competition from international players such as WeChat, with over 1 billion users (Tencent, 2019), as well as from social media apps with strong political leanings, such as Parler, which are growing in popularity. 6

A sustained movement of active users away from the main Facebook platform would inevitably impact the preferences of advertisers, who rely on active users to generate engagement for their clients. More broadly, Facebook’s business model is under threat from a growing social and political movement against the company’s perceived failure to remove misinformation and hateful content from its platform. The advertiser boycott in the wake of the Black Lives Matter protests highlights the commercial risks to Facebook of failing to respond adequately to the social justice concerns of its users and customers. 7 As we have seen in the context of both Facebook and precedents such as Friendster, due to reverse network effects, any such exodus of users and/or advertisers can occur suddenly and escalate rapidly (Garcia et al., 2013; Seki and Nakamura, 2016; Cannarella and Spechler, 2014).
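One way to see why such an exodus can be abrupt is the epidemiological model proposed by Cannarella and Spechler (2014), in which abandonment, like adoption, spreads through contact with others: once a critical mass of former users exists, leaving becomes self-reinforcing. The sketch below is a minimal, illustrative rendering of that irSIR-style dynamic; the parameter values, function name and printed output are our own assumptions for exposition, not estimates taken from the cited paper or from Facebook data.

```python
# Minimal sketch of an irSIR-style model of user uptake and abandonment,
# loosely following Cannarella and Spechler (2014). All parameter values
# below are illustrative assumptions, not estimates for any real platform.

def irsir(beta=0.6, nu=0.4, s0=0.95, i0=0.05, r0=0.0, steps=2000, dt=0.05):
    """Integrate the irSIR equations with a simple Euler scheme.

    S: potential users, I: active users, R: former users (fractions of N).
    Unlike standard SIR, 'recovery' (abandonment) spreads through contact
    with former users, which is what produces a sudden collapse once a
    critical mass of users has left ('reverse network effects').
    """
    s, i, r = s0, i0, r0
    trajectory = []
    for _ in range(steps):
        ds = -beta * s * i * dt   # adoption through contact with active users
        dr = nu * i * r * dt      # abandonment through contact with former users
        s, i, r = s + ds, i - ds - dr, r + dr
        trajectory.append((s, i, r))
    return trajectory

if __name__ == "__main__":
    # Seed a tiny fraction of former users: active use first grows,
    # then declines rapidly once abandonment becomes self-reinforcing.
    for step, (s, i, r) in enumerate(irsir(r0=0.01)):
        if step % 200 == 0:  # dt=0.05, so 200 steps = 10 time units
            print(f"t={step * 0.05:6.1f}  potential={s:.2f}  active={i:.2f}  former={r:.2f}")
```

Even this toy version reproduces the qualitative pattern emphasised in the literature cited above: a period of growth and apparent stability in active use, followed by a comparatively rapid, contagious decline.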

Collectively, these socio-technical and regulatory developments may force Facebook to shift its strategic priorities away from public networking (and the monetisation of user data through on-platform advertising) towards private, ephemeral messaging, monetised through commerce and payment transactions. Indeed, recent statements from Facebook point in this direction:

I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won't stick around forever. This is the future I hope we will help bring about. We plan to build this the way we've developed WhatsApp: focus on the most fundamental and private use case -- messaging -- make it as secure as possible, and then build more ways for people to interact on top of that. (Zuckerberg, 2019)

Of course, it does not automatically follow that Facebook would shut down its main platform, particularly if sufficient active users remain on it and the company bears little cost from keeping it open. On the other hand, closure becomes more likely once a sufficient number of active users and advertisers (though, importantly, not necessarily all) have left the platform, especially in its most profitable regions. In this latter scenario, it is conceivable that Facebook would consider shutting down the main platform’s developer API (Application Programming Interface — the interface between Facebook and client software) rather than leaving it open and vulnerable to a security breach. Indeed, it was in similar circumstances that Google recently closed the consumer version of its social network Google+ (Thacker, 2018).

In a more extreme scenario, Facebook Inc. could fail altogether and enter into a legal process such as corporate bankruptcy (insolvency): either a reorganisation that seeks to rescue the company as a going concern, typically by restructuring and selling off some of its assets; or liquidation, in which the company is wound down and dissolved entirely. Such a scenario, however, should be regarded as highly unlikely for the foreseeable future. Although we highlight some of the legal and ethical considerations arising from a Facebook insolvency scenario, the non-insolvent discontinuation or closure of the main platform shall be our main focus henceforth. It should be noted that, as a technical matter, this closure could take various forms. For example, Facebook could close the platform but preserve users’ profiles; alternatively, it could close the platform and destroy, or sell, part or all of its user data. Whilst our focus is on the ethical and legal consequences of Facebook’s closure at the aggregate level, we address variations in the specific form that this closure could take only to the extent that they bear upon our analysis.

Key ethical stakeholders and potential harms

In this section, we identify four key ethical stakeholders who could be harmed 8 by Facebook’s closure. These stakeholders are: dependent communities, in particular the socio-economic and media ecosystems that depend on Facebook to flourish; existing users, individuals (active and passive) as well as groups whose data are collected, analysed and monetised by Facebook, and stored on the company’s servers; non-users, particularly deceased users whose data continue to be stored and used by Facebook, and who will account for hundreds of millions of Facebook profiles within only a few decades; and future generations, who may have a scientific interest in the Facebook archive as a historical resource and cultural heritage.

We refer to these categories as ethical stakeholders, rather than user types, because our categorisation is based on the unique types of harm that each would face in a Facebook closure, not on their way of using the platform. That is, the categorisation is a tool for conducting our ethical analysis, rather than corresponding to already existing groups of users. A single individual may, for instance, have mutually conflicting interests in her capacity as an existing Facebook user, as a member of a dependent community, and as a future non-user. Treating her as a single unit, or as part of a particular user group, would therefore flatten the ethical complexity of the analysis. It follows that the interests of the stakeholders are by no means entirely compatible with one another, and there will unquestionably be conflicts of interest between them.

Furthermore, for the purposes of the present discussion, we do not intend to rank the relative value of the various interests; there is no internal priority to our analysis, although this may become an important question for future research. We also stress that our list is by no means exhaustive. Our focus is on the most significant ethical stakeholders who have an interest in Facebook’s closure and would experience unique harms due to the closure of a company that is both a global repository of personal data and the world’s main communication and social networking infrastructure. As such, we exclude traditional, economic stakeholders from the analysis — such as employees, directors, shareholders and creditors. While these groups certainly have stakes in Facebook’s potential closure, nothing significantly distinguishes their interests in the closure of a company like Facebook from their interests in the closure of any other (multinational) corporation. This also means that we exclude stakeholders that could benefit from Facebook’s closure, such as commercial competitors, or governments struggling with Facebook’s influence on elections and other democratic processes. Likewise, we refrain from assessing the relative overall (un)desirability of Facebook’s closure.

Dependent communities

The first key ethical stakeholders are the ‘dependent communities’, that is, communities and industries that have developed around the Facebook platform and now (semi-)depend on its existence to flourish. 9

Over the last decade, Facebook has become a critical economic engine and a key gateway to the internet as such (Digital Competition Expert Panel, 2019). The growing industry of digitally native content providers, from major news outlets such as Huffington Post and Buzzfeed to small independent agencies, is sometimes entirely dependent on exposure through Facebook. For example, the most recent change in Facebook’s News Feed algorithm had devastating consequences for this part of the media industry — some news outlets allegedly lost over 50% of their traffic overnight (Nicholls et al., 2018, p. 15). If such a small change in its algorithms could lead to the economic disruption of an entire industry, the wholesale closure of the main Facebook platform would likely cause significant economic and societal damage on a global scale, particularly were it to occur rapidly and/or unexpectedly, leaving news outlets and other dependent communities without sufficient time to migrate to other web platforms.

To be clear, our main concern here is not with individual media outlets, but with communities that are dependent on a functioning Facebook-based media ecosystem. While the sudden closure of one, or even several, media outlets may not pose a threat to this ecosystem, a sudden breakdown of the entire ecosystem would have severe consequences. For instance, many of the content providers reliant on exposure through Facebook are located in developing countries, in which Facebook has become almost synonymous with the internet, acting as the primary source of news (Mirani, 2015), amongst other functions. Given the primacy of the internet to public discourse in today’s world, it goes without saying that, for these communities, Facebook effectively is the digital public sphere, and hence a central part of the public sphere overall. A notable example is Laos, a country digitised so recently that its language (Lao) has not yet been properly indexed by Google (Kittikhoun, 2019). This lacuna is filled by Facebook, which has established itself not only as the main messaging service and social network in Laos, but effectively as the web as such.

The launch of Facebook’s Free Basics platform, which provides free access to Facebook services in less developed countries, has further increased the number of communities that depend solely on Facebook. According to the Free Basics website, 10 some 100 million people who would not otherwise have been connected are now using the services offered by the platform. As such, there are many areas and communities that now depend on Facebook in order to function and are thus susceptible to considerable harm were the platform to shut down. Note that this harm is not reducible to the individuals using Free Basics, but is a concern for the entire community, including members not using Facebook. As an illustrative example, consider the vital role played by Facebook and other social media platforms in disseminating information about the COVID-19 pandemic and keeping many communities connected during it. In a time of crisis, communities with a large dependency on a single platform become particularly vulnerable.

Of course, whether the closure of Facebook’s main platform harms these communities depends on the reasons for closure and the manner in which the platform closes down (sudden death vs slow decline). If closure is accompanied by the voluntary exodus of these communities, for example to a different part of the Facebook Inc. group (e.g., Messenger or Instagram) or to a third-party social network, they would arguably incur limited social or economic costs. Furthermore, it is entirely possible to imagine a scenario in which the main Facebook platform is shut down because it is unprofitable to the company as a whole, or does not align with the company’s strategic priorities, yet remains systemically important for a number of dependent communities. These communities could still use and depend on the platform, yet may simply not be valuable or lucrative enough for Facebook Inc. to justify keeping it open. Indeed, many of the dependent communities that we have described are located in regions of the world that are the least profitable for the company (certainly under an advertising-driven revenue model).

This raises the question of how these dependent communities should be protected in the event of Facebook’s demise. Existing legal frameworks governing Facebook make no special provision for its systemically important functions. As such, we propose that a new concept of ‘systemically important technological institutions’ (‘SITIs’) — drawing on the concept of ‘systemically important financial institutions’ (‘SIFIs’) — be given more serious consideration in managing the life and death of global communications platforms, such as Facebook, that provide a critical societal infrastructure. This proposal is examined further in the second part of this article.

Existing users

‘Existing users’ refers broadly to any living person or group of people who uses or has used the main Facebook platform and continues to maintain a Facebook profile or page. That is, both daily and monthly active users, as well as users who are not actively using the platform but still have a profile where their information is stored (including ‘de-activated’ profiles). Inevitably, there is an overlap between this set of stakeholders and ‘dependent communities’: the latter includes the former. Our main focus here is on ethical harms that arise at the level of the individual user, by virtue of their individual profiles or group pages, rather than the systemic and societal harms outlined above.

It is tempting to think that the harm to these users in the event of Facebook’s closure is limited to the loss of the value that they place on having access to Facebook’s services. However, this would be an incomplete conclusion. Everything a user does on the network is recorded and becomes part of Facebook’s data archive, which is where the true potential for harm lies. That is, the danger stems not only from losing access to the Facebook platform and the various services it offers, but from the future harms to which users (active and passive) are exposed as they lose control over their personal data. Any violation of the trust that these users place in Facebook with respect to the use of their personal data threatens to compromise user privacy, dignity and self-identity (Floridi, 2011). Naturally, these threats also exist today. However, as long as the platform remains operational, users have a clear idea of whom they can hold accountable for the processing of their data. Should the platform be forced to close, or worse still, sell off user data to a third party, this accountability would likely vanish.

The scope for harm to existing users upon Facebook’s closure depends on how Facebook continues to process user data. If the data are deleted (as occurred, for example, in the closure of Yahoo! Groups), 11 users could lose access to information — particularly photos and conversations — that is part of their identity, personal history and memory. Although Facebook does allow users to download much of their intentionally provided data to a hard drive — in the EU, implementing the right to data portability 12 — this does not encompass users’ conversations and other forms of interactive data. For example, Facebook photos in which a user has been tagged, but which were uploaded by another user, are not portable, even though these photos arguably contain the first user’s personal data. Downloading data is also an impractical option for the hundreds of millions of users accessing the platform only via mobile devices (DataReportal, 2019) that lack adequate storage and processing capacity. Personal archiving is an increasingly constitutive part of a person’s sense of self, but, as noted by Acker and Brubaker (2014), there is a tension between how users conceive of their online personal archives and the corporate, institutional reality of these archives.

On the other hand, it is highly plausible that Facebook would instead want to retain these data to train its machine learning models and to provide insights on users of other Facebook products, such as Instagram and Messenger. In this scenario, the risk to existing users is that they lose control over how their information is used, or at least fail to understand how and where it is being processed (especially where these users are not active on other Facebook products, such as Instagram). Naturally, involuntary user profiling is a major concern with Facebook as it stands. The difference in the case of closure is that many users will likely not even be aware of the possibility of being profiled. If Facebook shuts down, these users would no longer be able to view their data, leading many to believe that the data have in fact been destroyed. Yet a hypothetical user may, for instance, create an Instagram profile in 2030 and still be profiled by her lingering Facebook data, despite Facebook (the main platform) being long gone by then. Or, worse still, her old Facebook data may be used to profile other users who are demographically similar to her, without her (let alone their) informed consent or knowledge.

Existing laws in the EU offer limited protection for users’ data in these scenarios. If Facebook intended to delete the data, under EU data protection law it would likely need to notify users and seek their consent for the further processing of their data, 13 offering them the opportunity to retrieve their data before deletion (see the closure of Google+ 14 and Yahoo! Groups). On the other hand, if Facebook opted to retain and continue processing user data in order to provide the (other) services set out under its terms and conditions, it is unlikely that it would be legally required to obtain fresh consent from users — although, in reality, the company would likely still offer users the option to retrieve their data. Independently, users in the EU could also exercise their rights to data portability and erasure 15 to retrieve or delete their data.

In practice, however, the enforcement and realisation of these rights is challenging. Given that user data are commingled across the Facebook group of companies, and moreover have ‘velocity’ — an individual user’s data will likely have been repurposed and reused multiple times, together with the data of other users — it is unlikely that all of the data relating to an individual user can or will be identified and permanently ‘returned’. Likewise, given that user data are commingled, objection by an individual user to the transfer of their data is unlikely to be effective — their data will still be transferred with the data of other users who consent to the transfer. As previously mentioned, the data portability function currently offered by Facebook is also limited in scope.

Notwithstanding these practical challenges, a broader problem with the existing legal framework governing user data is that it is almost entirely focused on the rights of individual users. It offers little recognition or protection for the rights of groups — for example, Facebook groups formed around sports, travel, music or other shared interests — and thus limited protection against group-level ethical harm within the Facebook platform (i.e., when the ethical patient is a multi-agent system, not necessarily reducible to its individual parts [Floridi, 2012; Simon, 1995]).

This problem is further exacerbated by so-called ‘ad hoc groups’ (i.e., groups that are formed only algorithmically [Mittelstadt, 2017]), which may not correspond to any organic communities. For example, ‘dog owners living in Wales aged 38–40 that exercise regularly’ (Mittelstadt, 2017, p. 477) is a hypothetical, algorithmically formed group. Whereas many organically formed groups are already acknowledged by privacy and discrimination laws, or at least have the organisational means to defend their interests (e.g., people with a certain disability or sexual orientation), ad hoc algorithmic groups often lack organisational means of resistance.

Non-users

The third key ethical stakeholders are those who never, or no longer, use Facebook, yet are still susceptible to harms resulting from its demise. This category includes a range of disparate sub-groups, including individuals who do not have an account but whose data Facebook nevertheless collects and tracks from apps or websites that embed its services (Hern, 2018). Facebook uses these data, inter alia, to target such individuals with ads encouraging them to join the platform (Baser, 2018). Similarly, the non-user category includes individuals who may be tracked by proxy, for example by analysing data from their relatives or close network (more on this below). A third sub-group is minors who may feature in photos and other types of data uploaded to Facebook by their parents (so-called “sharenting”).

The most significant type of non-users, however, are deceased users, i.e., those who have used the platform in the past but have since passed away. Although this may currently seem a rather niche concern, the deceased user group is expected to grow rapidly over the next couple of decades. As shown by Öhman and Watson (2019), Facebook will soon host hundreds of millions of deceased profiles on its servers. 16 This sub-group is of special interest since, unlike living non-users, who generally enjoy at least some legal rights to privacy and data protection (as outlined above), the deceased do not qualify for protection under existing data protection laws. 17 The lack of protection for deceased data subjects is a pressing concern even without Facebook closing. 18 Facebook does not have any legal obligation to seek their consent (or that of their representatives) before deleting, or otherwise further processing, users’ data after death (although Denmark, Spain and Italy are exceptions). 19 Moreover, even if Facebook tried to seek the consent of their representatives, it would face practical difficulties, given that users do not always appoint a ‘legacy contact’ to represent them posthumously.

The closure of the platform, however, opens up an entirely new level of ethical harm, particularly in the (unlikely but not impossible) case of bankruptcy or insolvency. Such a scenario would likely force Facebook to sell off its assets to the highest bidder. However, unlike the sale or transfer of the data of living users, which under the GDPR and EU insolvency law requires users’ informed consent, there is no corresponding protection for the sale of deceased users’ data in insolvency, such as a requirement to obtain the consent of their next of kin. 20 Moreover, there are no limitations on who could purchase these data and for what purposes. For example, a deceased person’s adversaries could acquire their Facebook data in order to compromise their privacy or tarnish their reputation posthumously. Incidents of this kind have already been reported on Twitter, where the profiles of deceased celebrities have been hacked and used to spread propaganda. 21 The profiles of deceased users may also remain commercially valuable and attractive to third-party purchasers — for instance, by providing insights on living associates of the deceased, such as their friends and relatives. As in genealogy — where one individual’s DNA also contains information about their children, siblings and parents — one person’s data may similarly be used to predict another’s behaviour or dispositions (see Creet [2019] on the relationship between genealogy websites and big pharma).

In sum, the demise of a platform with Facebook’s global and societal significance is not only a concern for those who use, or have used it directly, but also for individuals who are indirectly affected by its omnipresence in society.

Future generations

It is also important to consider indirect harms arising from Facebook’s potential closure due to missed opportunities. The most important stakeholders to consider in this respect are future generations, who, much like deceased users, are seldom directly protected in law. By ‘future generations’ we refer mainly to future historians and sociologists studying the origins and dynamics of digital society, but also to the general public and their ability to access their shared digital cultural heritage.

It is widely accepted that the open web holds great cultural and historical value (Rosenzweig, 2003), and thus several organisations — perhaps most notably the Internet Archive’s Wayback Machine 22 — as well as researchers (Brügger and Schroeder, 2017) are working to preserve it. Personal data, however, have received less attention. Although (most) individual user data may be relatively inconsequential for historical, scientific and cultural purposes, the aggregate Facebook data archive amounts to a digital artefact of considerable significance. The personal digital heritage of each Facebook user is, or will become, part of our shared digital cultural heritage (Cameron and Kenderdine, 2007). As Varnado writes:

Many people save various things in digital format, and if they fail to alert others of and provide access to those things, certain memories and stories of their lives could be lost forever. This is a loss not only for a decedent’s legacy and successors but also for society as a whole. […] This is especially true of social networking accounts, which may be the principal—and eventually only—source for future generations to learn about their predecessors (Varnado, 2014, p. 744)

Not only is Facebook becoming a significant digital cultural artefact, it is arguably the first such artefact to have truly global proportions. Indeed, Facebook is by far the largest archive of human behaviour in history. As such, it can legitimately be said to hold what Appiah (2006) calls ‘cosmopolitan value’ — that is, something significant enough to be part of the narrative of our species. Given its global reach, and thus its interest to all of humankind (present and future), this record can even be thought of as a form of future public good (Waters, 2002, p. 83), without which we risk falling into a ‘digital dark age’ (Kuny, 1998; Smit et al., 2011) — a state of ignorance of our digital past.

The concentration of digital cultural heritage in a single (privately controlled and corporate) platform is in and of itself problematic, especially in view of the risk of Facebook monopolising private and collective history (Öhman and Watson, 2019). These socio-political concerns are magnified in the context of the platform’s demise. For such a scenario poses a threat not only to the control or appraisal of digital cultural heritage, but also to its very existence — by fragmenting the archive, thus destroying its global significance, and/or by destroying it entirely due to a lack of commercial or other interest in preserving it.

These risks are most acute in an insolvency scenario, where, as discussed above, the data are more likely to be deleted or sold to third parties, including by being split up among a number of different data controllers. Although such an outcome may be viewed as a positive development in terms of decentralising Facebook’s power (Öhman and Watson, 2019), it also risks dividing and therefore diluting the global heritage and cosmopolitan value held within the platform. Worse still would be a scenario in which cosmopolitan value is destroyed due to a lack of, or divergent, commercial interests in purchasing Facebook’s data archives, or indeed the inability to put a price on these data due to the absence of agreed-upon accounting rules for a company’s (big) data assets (Lyford-Smith, 2017). The recent auction of Cambridge Analytica’s assets in administration, where the highest bid received for the company’s business and intellectual property rights (assumed to include the personal data of Facebook users) was a mere £1, is a sobering illustration of these challenges. 23

However, our concerns are not limited to an insolvency scenario. In the more plausible scenario of Facebook closing the shutters on one of its products, such as the main platform website and app, the archive assembled by the product would no longer be accessible as such to either the public or future generations, even though the data and insights would likely continue to exist and be utilised within the Facebook Inc. group of companies (inter alia, to provide insights on users of other products such as Instagram and Messenger).

Recommendations

The stakeholders presented above, and the harms to which they are exposed, constitute the ethical landscape in which legal and policy measures to manage Facebook’s closure must be shaped. Although it is premature to propose definitive solutions, in this section we offer four broad recommendations for future policy and research in this area. These recommendations are by no means intended to be comprehensive solutions to “the” problem of Big Tech closure, but rather are posed as a starting point for further debate.

Develop a regulatory framework for Systemically Important Technological Institutions.

As examined earlier, many societies around the world have become ever more dependent on digital communication and commerce through Big Tech platforms such as Facebook and would be harmed by their (disorderly) demise. Consider, for instance, the implications of a sudden breakdown of these platforms in times of crisis like the COVID-19 pandemic. As such, there are compelling reasons to regulate these platforms as systemically important institutions. By way of analogy to the SIFI concept — that is, domestic or global financial institutions and financial market infrastructures whose failure is anticipated to have adverse consequences for the rest of the financial system and the wider economy (FSB, 2014) — we thus propose that a new concept of systemically important technological institution, or ‘SITI’, be given more serious consideration.

The regulatory framework for SITIs should draw on existing approaches to regulating SIFIs, critical national infrastructures and public utilities, respectively. In the insolvency context, drawing upon best practices for SIFI resolution, the SITI regime could include measures to fast-track insolvency proceedings in order to facilitate the orderly wind-down or reorganisation of a failing SITI in a way that minimises disruption to the (essential) services that it provides, thus mitigating harm to dependent communities. This might include resolution powers vested in a regulatory body authorised to supervise SITIs (this could be an existing body, such as the national competition or consumer protection/trade agency, or a newly established ‘Tech’ regulator) — including the power to mandate a SITI, such as Facebook, to continue to provide ‘essential services’ to dependent communities — for example, access to user groups or messaging apps — or else facilitate the transfer of these services to an alternative provider. 

In this way, SITIs would be subject to public obligations similar to those imposed on regulated public utilities, such as water and electricity companies — as “private companies that control infrastructural goods” (Rahman, 2018) — in order to prevent harm to dependent communities. 24 Likewise, the SITI regime should include obligations for failure planning (by way of analogy to ‘resolution and recovery planning’ under the SIFI regime). In the EU, this regime should also build on the regulatory framework for ‘essential services’, specifically essential ‘digital service providers’, under the EU NIS (Network and Information Systems) Directive, 25 which focuses on managing and mitigating cyber security risks to critical national infrastructures.

Whilst the fine print of the SITI regulatory regime requires further deliberation — indeed, the analogy with SIFIs and public utilities has evident limitations — we hope this article will help spark discussion to that end.

Strengthen the legal mechanisms for users to control their own data in cases of platform insolvency or closure.

Existing data protection laws are insufficient to protect Facebook users from the ethical harms that could arise from the handling of their data in the event of the platform’s closure. As we have highlighted, the nature of ‘Big Data’ is such that even if users object to the deletion or sale of their data, and request their return, Facebook would be unable, as a practical matter, to fully satisfy that request. As a result, users face ethical harm where their data are used against their will, in ways that could undermine their privacy, dignity and self-identity.

This calls for new data protection mechanisms that give Facebook users better control over their data. Potential solutions include creating new regulatory obligations for data controllers to segregate user data, in particular as between different Facebook subsidiaries (e.g., the main platform and Instagram), where data are currently commingled. 26 This would allow users to retrieve their data more easily were Facebook to shut down, and could offer a more effective way of protecting the interests of ad hoc ‘algorithmic’ groups (Mittelstadt, 2017). However, to the extent that segregating data in this way undermines the economies of scale that facilitate Big Data analysis, it could have the unintended effect of reducing the benefits that users gain from the Facebook platform, inter alia through personalised recommendations.

Additionally, or alternatively, further consideration should be given to the concept of ‘data trusts’, as a bottom-up form of data governance and control by users (Delacroix & Lawrence, 2019). Under a data trust structure, Facebook would act as a trustee for user data, holding them on trust for the user(s) — as the settlor(s) and beneficiary(ies) of the trust — and managing and sharing the data in accordance with their instructions. Moreover, a plurality of trusts can be developed, for example, designed around specified groups of aggregated data (in order to leverage the economies of scope and scale of large, combined data sets). As a trustee, Facebook would be subject to a fiduciary duty to only use the data in ways that serve the best interests of the user (see further Balkin, 2016). As such, a data trust structure could provide a stronger legal mechanism for safeguarding the wishes of users with respect to their data as compared to the existing standard of ‘informed consent’. Another possible solution involves decentralising the ownership and control of user data, for example using distributed ledger technology. 27  

Strengthen legal protection for the data and privacy of deceased users.

Although the interests of non-users as a group need to be given serious consideration, we highlight the privacy of deceased users as an area in particular need of protection. We recommend that more countries follow the lead of Denmark in implementing legislation that, at least to some degree, protects the profiles of deceased users from being arbitrarily sold, mined and disseminated in the case of Facebook’s closure. 28 Such legislation could follow several different models. Perhaps the most intuitive option is simply to enshrine the privacy rights of deceased users in data protection law, such as (in the EU) the GDPR. This can either be designed as a personal (but time-limited) right (as in Denmark) or as a right bestowed upon next of kin (as in Spain and Italy). It could also be shaped by extending copyright law protection (Harbinja, 2017) or take place within what Harbinja (2013, p. 20) calls a ‘human rights-based regime’ (see also Bergtora Sandvik, 2020), i.e., as a universal and inviolable right. Alternatively, it could be achieved by designating companies such as Facebook as ‘information fiduciaries’ (Balkin, 2016), pursuant to which they would have a duty of care to act in the best interests of users with respect to their data, including posthumously.

The risk of ethical harm to deceased users or customers in the event of corporate demise is not limited to the closure of Facebook, or of Big Tech (platforms). Although Facebook will likely be the single largest holder of deceased profiles in the 21st century, other social networks (LinkedIn, WeChat, YouTube etc.) are also likely to host hundreds of millions of deceased profiles within only a few decades. And as more sectors of the economy become digitised, any company holding customer data will eventually hold a large volume of data relating to deceased subjects. As such, developing more robust legal protection for the data privacy rights of the deceased is important for mitigating the ethical harms of corporate demise, broadly defined.

However, for obvious reasons, deceased data subjects have little political influence, and are thus unlikely to become a top priority for policy makers. Moreover, any legislative measures to protect their privacy are likely to be adopted at national or regional levels first, although the problem inevitably remains global in nature. A satisfactory legislative response may therefore take significant time and political effort to develop. Facebook should therefore be encouraged to specify in its terms of service how it intends to handle deceased users’ data upon closure, and in particular to commit not to sell those data to a third party where this would not be in the best interests of said users. While this private approach may not have the same effectiveness and general applicability as national or regional legislation protecting deceased user data, it would provide an important first step.

Create stronger incentives for Facebook to share insights and preserve historically significant data for future generations.

Future generations cannot directly safeguard their interests, and thus it is incumbent on us to do so. Given the societal, historical and cultural interest in preserving, or at least averting the complete destruction of, Facebook’s cultural heritage, stronger incentives need to be created for Facebook to take responsibility and begin acknowledging the global historical value of its data archives.

A promising strategy would be to protect Facebook’s archive as a site of digital global heritage, drawing inspiration from the protection of physical sites of global cultural heritage, such as through UNESCO World Heritage protected status. 29 Pursuant to Article 6.1 of the Convention Concerning the Protection of World Cultural and Natural Heritage (UNESCO, 1972), state parties acknowledge that, while respecting the sovereignty of the state territory, their national heritage may also constitute world heritage, which falls within the interests and duties of the ‘international community’ to preserve. Meanwhile, Article 4 stipulates that:

Each State Party to this Convention recognizes that the duty of ensuring the identification, protection, conservation, presentation and transmission to future generations of the cultural and natural heritage […] situated on its territory, belongs primarily to that State. It will do all it can to this end, to the utmost of its own resources and, where appropriate, with any international assistance and co-operation, in particular, financial, artistic, scientific and technical, which it may be able to obtain. (UNESCO, 1972, Art. 4)

A digital version of this label may similarly entail acknowledgement by data controllers of, and a pledge to preserve, the cosmopolitan value of their data archive, while allowing them to continue using the archive. However, in contrast to physical sites and material artefacts, which fall under the control of sovereign states, the most significant digital artefacts in today’s world are under the control of Big Tech companies, like Facebook. As such, there is reason to consider a new international agreement between corporate entities, in which they pledge to protect and conserve the global cultural heritage on their platforms. 30

However, bestowing the label of global digital heritage does not resolve the question of access to this heritage. Unlike Twitter, which in 2010 attempted to donate its entire archive to the Library of Congress, 31 Facebook holds an archive that arguably contains more sensitive, personal information about its users. Moreover, these data offer the company more of a competitive advantage than Twitter’s (the latter’s user accounts are public, in contrast to Facebook, where many profiles are visible only to friends of the user). These considerations could reduce Facebook’s readiness to grant public access to its archives. Nevertheless, safeguarding the existence and historical significance of Facebook’s records remains an important first step towards making them accessible to future generations.

It goes without saying that the interests of future generations will at times conflict with those of the other three ethical stakeholders we have identified. As Mazzone (2012, p. 1660) points out, ‘the societal interest in preserving postings to social networking sites for future historical study can be in tension with the privacy interests of individual users.’ Indeed, Facebook’s data are proprietary, and any interventions must respect its rights in the data as well as the privacy rights of users. Yet the mere fact that there are conflicts of interest and complexities does not mean that the interests of future generations ought to be neglected altogether.

For the foreseeable future, Facebook’s demise remains a high-impact, low-probability event. However, mapping out the legal and ethical landscape for such an eventuality, as we have done in this article, allows society to better manage the fallout should this scenario materialise. Moreover, our analysis helps to shed light on lower-impact but higher-probability scenarios. Companies regularly fail and disappear — increasingly taking with them troves of customer-user data that receive only limited protection and attention under existing law. The legal and ethical harms that we have identified in this article, many of which flow from the use of data following Facebook’s closure, are thus equally relevant to the closure of other companies, albeit on a smaller scale. Regardless of which data-rich company is the next to go, we must make sure that an adequate governance framework is in place to minimise the systemic and individual damage. Our hope is that this article will help kickstart a debate and further research on these important issues.

Acknowledgements

We are deeply grateful to Luciano Floridi, David Watson, Josh Cowls, Robert Gorwa, Tim R Samples, and Horst Eidenmüller for valuable feedback and input. We would also like to add a special thanks to reviewers James Meese and Steph Hill, and editors Frédéric Dubois and Kris Erickson for encouraging us to further improve this manuscript.

Acker, A., & Brubaker, J. R. (2014). Death, memorialization, and social media: A platform perspective for personal archives. Archivaria , 77 , 2–23. https://archivaria.ca/index.php/archivaria/article/view/13469

Aguilar, A. (2015). The global economic impact of Facebook: Helping to unlock new opportunities [Report]. Deloitte. https://www2.deloitte.com/uk/en/pages/technology-media-and-telecommunications/articles/the-global-economic-impact-of-facebook.html

Aplin, T., Bentley, L., Johnson, P., & Malynicz, S. (2012). Gurry on breach of confidence: The protection of confidential information . Oxford University Press.

Appiah, K. A. (2006). Cosmopolitanism: Ethics in a world of strangers . Penguin.

Balkin, J. (2016). Information fiduciaries and the first amendment. UC Davis Law Review , 49 (4), 1183–1234. https://lawreview.law.ucdavis.edu/issues/49/4/Lecture/49-4_Balkin.pdf

Baser, D. (2018, April 16). Hard questions: What data does Facebook collect when I’m not using Facebook, and why? [Blog post]. Facebook Newsroom . https://newsroom.fb.com/news/2018/04/data-off-facebook/

Bergtora Sandvik, K. (2020). Digital dead body management (DDBM): Time to think it through. Journal of Human Rights Practice , uaa002 . https://doi.org/10.1093/jhuman/huaa002

boyd, d. (2013). White flight in networked publics? How race and class shaped American teen engagement with MySpace and Facebook. In L. Nakamura & P. Chow-White (Eds.), Race after the internet.

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian . https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Cannarella, J., & Spechler, J. (2014). Epidemiological Modelling of Online Social Network Dynamics. ArXiv . https://arxiv.org/pdf/1401.4208.pdf

Competition & Markets Authority. (2020). Online Platforms and Digital Advertising (Market Study) [Final report]. Competition & Markets Authority. https://assets.publishing.service.gov.uk/media/5efc57ed3a6f4023d242ed56/Final_report_1_July_2020_.pdf

Creet, J. (2019). Data mining the deceased: Ancestry and the business of family [Documentary]. https://juliacreet.vhx.tv/

DataReportal. (2019). Global digital overview . https://datareportal.com/?utm_source=Statista&utm_medium=Data_Citation_Hyperlink&utm_campaign=Data_Partners&utm_content=Statista_Data_Citation

Delacroix, S., & Lawrence, N. D. (2019). Disturbing the ‘One size fits all’ approach to data governance: Bottom-up. International Data Privacy Law , 9 (4), 236–252. https://doi.org/10.1093/idpl/ipz014

Di Cosmo, R., & Zacchiroli, S. (2017). Software heritage: Why and how to preserve software source code. iPRES 2017 – 14th international conference on digital preservation . 1–10.

Cameron, F., & Kenderdine, S. (Eds.). (2007). Theorizing digital cultural heritage: A critical discourse. MIT Press.

Facebook. (2017). Form 10-K annual report for the fiscal period ended December 31, 2017.

Facebook. (2018). Form 10-K annual report for the fiscal period ended December 31, 2018.

Facebook. (2019, June 18). Coming in 2020: Calibra [Blog post]. Facebook Newsroom . https://about.fb.com/news/2019/06/coming-in-2020-calibra/

Facebook. (2020). Form 10-Q quarterly report for the quarterly period ended March 31, 2020 .

Federal Trade Commission. (2019, July 24). FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook [Press Release]. News & Events . https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions

Financial Stability Board. (2014). Key attributes of effective resolution regimes for financial institutions. https://www.fsb.org/wp-content/uploads/r_141015.pdf

Floridi, L. (2011). The informational nature of personal identity. Minds and Machines , 21 (4), 549–566. https://doi.org/10.1007/s11023-011-9259-6

Floridi, L. (2012). Distributed morality in an information society. Science and Engineering Ethics , 19 (3), 727–743. https://doi.org/10.1007/s11948-012-9413-4

Furman, J. (2019). Unlocking digital competition [Report]. Digital Competition Expert Panel. https://www.gov.uk/government/publications/unlocking-digital-competition-report-of-the-digital-competition-expert-panel

Garcia, D., Mavrodiev, P., & Schweitzer, F. (2013). Social resilience in online communities: The autopsy of Friendster. Proceedings of the First ACM Conference on Online Social Networks (COSN ’13) . https://doi.org/10.1145/2512938.2512946.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society , 22 (6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Harbinja, E. (2013). Does the EU data protection regime protect post-mortem privacy and what could be the potential alternatives? Scripted , 10 (1). https://doi.org/10.2966/scrip.100113.19

Harbinja, E. (2014). Virtual worlds—A legal post-mortem account. Scripted , 11 (3). https://doi.org/10.2966/scrip.110314.273

Harbinja, E. (2017). Post-mortem privacy 2.0: Theory, law, and technology. International Review of Law, Computers & Technology , 31 (1), 26–42. https://doi.org/10.1080/13600869.2017.1275116

Howard, P. N., & Hussain, M. M. (2013). Democracy’s fourth wave? Digital media and the arab spring . Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199936953.001.0001

Information Commissioner’s Office. (2019, October). Statement on an agreement reached between Facebook and the ICO [Statement]. News and Events . https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2019/10/statement-on-an-agreement-reached-between-facebook-and-the-ico

Kittikhoun, A. (2019). Mapping the extent of Facebook’s role in the online media landscape of Laos [Master’s dissertation.]. University of Oxford, Oxford Internet Institute.

Kuny, T. (1998). A digital dark ages? Challenges in the preservation of electronic information. International Preservation News, 17(May), 8–13.

Lyford-Smith, D. (2017). Data as an asset. ICAEW. https://www.icaew.com/technical/technology/data/data-analytics-and-big-data/data-analytics-articles/data-as-an-asset

Marcus, D. (2020, May). Welcome to Novi [Blog post]. Facebook Newsroom . https://about.fb.com/news/2020/05/welcome-to-novi/

Mazzone, J. (2012). Facebook’s afterlife. North Carolina Law Review , 90 (5), 1643–1685.

Mirani, L. (2015). Millions of Facebook users have no idea they’re using the internet. Quartz . https://qz.com/333313/milliions-of-facebook-users-have-no-idea-theyre-using-the-internet/

MIT Technology Review. (2013). An autopsy of a dead social network. https://www.technologyreview.com/s/511846/an-autopsy-of-a-dead-social-network/

Mittelstadt, B. (2017). From Individual to Group Privacy in Big Data Analytics. Philos. Technol , 30 , 475–494. https://doi.org/10.1007/s13347-017-0253-7

Brügger, N., & Schroeder, R. (Eds.). (2017). The web as history: Using web archives to understand the past and the present. UCL Press.

Öhman, C., & Floridi, L. (2018). An ethical framework for the digital afterlife industry. Nature Human Behaviour . https://doi.org/10.1038/s41562-018-0335-2

Öhman, C. J., & Watson, D. (2019). Are the dead taking over Facebook? A Big Data approach to the future of death online. Big Data & Society , 6 (1), 205395171984254. https://doi.org/10.1177/2053951719842540

Open Data Institute. (2018, July 10). What is a Data Trust? [Blog post]. Knowledge & opinion blog . https://theodi.org/article/what-is-a-data-trust/#1527168424801-0db7e063-ed2a62d2-2d92

Piper Sandler. (2020). Taking stock with teens, spring 2020 survey . Piper Sandler. http://www.pipersandler.com/3col.aspx?id=5956

Piskorski, M. J., & Knoop, C.-I. (2006). Friendster (A) [Case Study]. Harvard Business Review.

Rahman, K. S. (2018). The new utilities: Private power, social infrastructure, and the revival of the public utility concept. Cardozo Law Review , 39 (5), 1621–1689. http://cardozolawreview.com/wp-content/uploads/2018/07/RAHMAN.39.5.2.pdf

Rosenzweig, R. (2003). Scarcity or abundance? Preserving the past in a digital era. The American Historical Review , 108 (3), 735–762. https://doi.org/10.1086/ahr/108.3.735

Scarre, G. (2013). Privacy and the dead. Philosophy in the Contemporary World , 19 (1), 1–16. https://doi.org/10.1063/1.2756072

Seki, K., & Nakamura, M. (2016). The collapse of the Friendster network started from the center of the core. 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) , 477–484. https://doi.org/10.1109/ASONAM.2016.7752278

Simon, T. W. (1995). Group harm. Journal of Social Philosophy , 26 (3), 123–138. https://doi.org/10.1111/j.1467-9833.1995.tb00089.x

Smit, E., Hoeven, J., & Giaretta, D. (2011). Avoiding a digital dark age for data: Why publishers should care about digital preservation. Learned Publishing , 24 (1), 35–49. https://doi.org/10.1087/20110107

Stokes, P. (2015). Deletion as second death: The moral status of digital remains. Ethics and Information Technology , 17 (4), 1–12. https://doi.org/10.1007/s10676-015-9379-4

Taylor, J. S. (2005). The myth of posthumous harm. American Philosophical Quarterly , 42 (4), 311–322. https://www.jstor.org/stable/20010214

Tencent. (2019). Q2 earnings release and interim results for the period ended June 30, 2019 .

Thacker, D. (2018, December 10). Expediting Changes to Google+ [Blog post]. Google . https://blog.google/technology/safety-security/expediting-changes-google-plus/

Torkjazi, M., Rejaie, R., & Willinger, W. (2009). Hot today, gone tomorrow: On the migration of MySpace users. Proceedings of the 2nd ACM Workshop on Online Social Networks - WOSN ’09 , 43. https://doi.org/10.1145/1592665.1592676

U. K. Government. (2019). Online harms [White Paper]. U.K. Government, Department for Digital, Culture, Media & Sport; Home Department. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf

UNESCO. (1972). Convention concerning the protection of the world cultural and natural heritage. Adopted by the General Conference at its seventeenth session, Paris, 16 November 1972.

Varnado, A. S. S. (2014). Your digital footprint left behind at death: An illustration of technology leaving the law behind. Louisiana Law Review , 74 (3), 719–775. https://digitalcommons.law.lsu.edu/lalrev/vol74/iss3/7

Warren, E. (2019). Here’s How We Can Break Up Big Tech [Medium Post]. Team Warren . https://medium.com/@teamwarren/heres-how-we-can-break-up-big-tech-9ad9e0da324c

Waters, D. (2002). Good archives make good scholars: Reflections on recent steps toward the archiving of digital information. In The state of digital preservation: An international perspective (pp. 78–95). Council on Library and Information Resources. https://www.clir.org/pubs/reports/pub107/waters/

York, C., & Turcotte, J. (2015). Vacationing from facebook: Adoption, temporary discontinuance, and readoption of an innovation. Communication Research Reports , 32 (1), 54–62. https://doi.org/10.1080/08824096.2014.989975

Zuckerberg, M. (2019, March 6). A privacy-focused vision for social networking [Post]. https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634/

1. Unless otherwise stated, references to ‘Facebook’ are to the main platform (comprising News Feed, Groups and Pages, inter alia , both on the mobile app as well as the website), and do not include the wider group of companies that comprise Facebook Inc, namely WhatsApp, Messenger, Instagram, Oculus (Facebook, 2018), and Calibra (recently rebranded as Novi Financial) (Marcus, 2019; 2020).

2. See https://www.washingtonpost.com/news/the-intersect/wp/2015/02/12/8-throwback-sites-you-thought-died-in-2005-but-are-actually-still-around/

3. See https://qz.com/1408120/yahoo-japan-is-shutting-down-its-website-hosting-service-geocities/

4. Regulation (EU) 2016/679 <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG>.

5. California Legislature Assembly Bill No. 375 <https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375>.

6. See <https://www.politico.com/news/2020/07/06/trump-parler-rules-349434>.

7. See <https://www.nytimes.com/2020/06/29/business/dealbook/facebook-boycott-ads.html>.

8. We adopt an inclusive definition of ethical harm (henceforth just ‘harm’) as any encroachment upon personal or collective and legitimate interests such as dignity, privacy, personal welfare, and freedom.  

9. Naturally, not all communities with a Facebook presence can be included in this category. For example, the lost marketing opportunities for large multinational corporations such as Coca Cola Inc., due to the sudden demise of Facebook, cannot be equated with the harm to a small-scale collective of sole traders in a remote area (e.g., a local craft or farmers’ market) whose only exposure to customers is through the platform. By ‘dependent communities’ we thus refer only to communities whose ability to flourish and survive may be threatened by Facebook’s sudden demise.

10. See https://info.internet.org/en/impact/

11. See https://help.yahoo.com/kb/understand-data-downloaded-yahoo-groups-sln35066.html

12. See Art 20 GDPR. 

13. See Art 4(2) GDPR (defining ‘processing’ to include, inter alia , ‘erasure or destruction’ of personal data).

14. See Google Help, (2019) ‘Shutting down Google+ for consumer (personal) accounts on April 2, 2019’ https://support.google.com/plus/answer/9195133?hl=en-GB . Facebook states in its data policy that ‘We store data until it is no longer necessary to provide our services and Facebook Products or until your account is deleted — whichever comes first’, which might suggest that users provide their consent to future deletion of their data when they first sign up to Facebook. However, it is unlikely that this clause substitutes for the requirement to obtain specific and unambiguous consent to data processing, for specific purposes — including deletion of data — under the GDPR (see Articles 4(11) and 6(1)(a)).

15. See Art 17 GDPR.

16. Facebook’s policy on deceased users has changed somewhat over the years, but the current approach is to allow next of kin to either memorialise or permanently delete the account of a confirmed deceased user (Facebook, n.d.). Users are also encouraged to select a ‘legacy contact’, that is, a second Facebook user who will act as a custodian in the event of their demise. Although these technical solutions have proven to be successful on an individual, short-term level, several long-term problems remain unsolved. In particular, what happens when the legacy contact themselves dies? For how long will it be economically viable to store hundreds of millions of deceased profiles on the servers?

17. However, note that the information of a deceased subject can continue to be protected by the right to privacy under Art 8 of the European Convention on Human Rights, and the common law of confidence with respect to confidential personal information (although the latter is unlikely to apply to data processing by Facebook) (see generally Aplin et al., 2012).

18.  Several philosophers and legal scholars have recently argued for the concept of posthumous privacy to be recognised (see Scarre [2014, p. 1], Stokes [2015] and Öhman & Floridi [2018]). 

19. Recital 27 of the GDPR clearly states that '[t]his Regulation does not apply to the personal data of deceased persons'; however, it does at the same time allow member states to make additional provision for this purpose. Accordingly, a few European countries have included privacy rights for deceased data subjects in their implementing laws (for instance, Denmark, Spain and Italy; see https://www.twobirds.com/en/in-focus/general-data-protection-regulation/gdpr-tracker/deceased-persons). However, aside from these limited cases, existing data protection for the deceased is alarmingly sparse across the world.

20. Under EU insolvency law, any processing of personal data (for example, deletion, sale or transfer of the data to a third-party purchaser) must comply with the GDPR (see Art 78 (Data Protection) of EU Regulation 2015/848 on Insolvency Proceedings (recast)). However, see endnote 17 with regard to the right to privacy and confidentiality.

21.  See https://www.alaraby.co.uk/english/indepth/2019/2/25/saudi-trolls-hacking-dead-peoples-twitter-to-spread-propaganda

22.  See https://archive.org/web/

23. See Administrator’s Progress Report (2018) https://beta.companieshouse.gov.uk/company/09375920/filing-history . However, consumer data (for example, in the form of customer loyalty schemes) has been valued more highly in other corporate insolvencies (see for example, the Chapter 11 reorganisation of the Caesar’s Entertainment Group https://digital.hbs.edu/platform-digit/submission/caesars-entertainment-what-happens-in-vegas-ends-up-in-a-1billion-database/ ).

24. There is a broader call, from a competition (antitrust) policy perspective, to regulate Big Tech platforms as utilities on the basis that these platforms tend towards natural monopoly (see, e.g. Warren, 2019). Relatedly, the UK Competition and Markets Authority has recommended a new ‘pro-competition regulatory regime’ for digital platforms, such as Google and Facebook, that have ‘strategic market status’ (Furman, 2019; CMA, 2020). The measures proposed under this regime — such as facilitating interoperability between social media platforms— would also help to mitigate the potential harms to Facebook’s ethical stakeholders due to its closure.

25. Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union OJ L 194, 19.7.2016.

26.  Facebook has stated that financial data collected by Calibra/Novi, the digital wallet for Libra cryptocurrency, will not be shared with Facebook or third parties without user consent (Facebook 2019b). The segregation of user data is the subject of a ruling by the German Competition Authority, however this was overturned on appeal by Facebook (and is now being appealed by the competition authority — the original decision is here: https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2019/07_02_2019_Facebook.html ).

27. A related imperative is to clarify the financial accounting rules for the valuation of (Big) data assets, including in an insolvency context.

28. See s 2(5) of the Danish Data Protection Act 2018 <https://www.datatilsynet.dk/media/7753/danish-data-protection-act.pdf>.

29. UNESCO has previously initiated a project to preserve source code (see Di Cosmo R and Zacchiroli, 2017).

30. This could be formal or informal, for example in the vein of the 'Giving Pledge', a philanthropic initiative to encourage billionaires to give away the majority of their wealth in their lifetimes (see <https://givingpledge.org/>).

31.  Although the initiative has ceased to operate as originally planned, it remains one of the best examples of large scale social media archiving (see https://www.npr.org/sections/thetwo-way/2017/12/26/573609499/library-of-congress-will-no-longer-archive-every-tweet ). 

BG Davis

A large bibliography and copious footnotes fail to obscure the extremely speculative nature of this paper. One is reminded of Fukuyama's absurd forecasts in his "End of History" essay. This sort of paper is wonderful for enhancing the publication list of the authors. However, it contributes little to the sum total of useful or applicable knowledge. In reality, no economy, nation or community is dependent on Facebook. Facebook is utterly non-essential. Its disappearance would cause some disruption to some limited groups and sectors for a limited period of time.

Stewart Pearson

This is an important topic but the paper lacks rigor and perpetuates myths. (1) The issue of data ownership is urgent not because FB might cease to exist, but because it is a violation of human rights. (2) The data stored by FB is not an essential archive, but represents a biased and polarized profile of an unrepresentative segment. (3) The value of its data is real, but only because it is current, and it will rapidly evaporate. (4) FB's attempts to show economic value are not proven, but the decline of journalism and start-ups under the FB and Google duopoly is a fact. (5) FB cannot even prove value to the advertisers who fund 98% of its revenue; but again, that the duopoly has extracted value from leading brands, from Coca-Cola down to SMBs, is another fact.

B.R.

Although, for some unknown reason, the nature of the connection between Facebook and the Oxford Internet Institute isn't clearly stated here, the reader might be interested to learn that the Institute receives annual funding from Facebook. This can be verified by visiting the website of the OII (see the About > Giving to the OII web page). Facebook also directly provides millions in funding to other research projects at Oxford.

This should be kept in mind while reading the above recommendations and the speculative depiction of the demise of Facebook.

It is also interesting to note that mainstream news articles (e.g., The Times or the Telegraph) referencing this paper, with their dramatic and less nuanced characterisations of a Facebook 'collapse', have been retweeted by the OII. It should also be noted that, contrary to what the paper describes as one of the major factors in a possible dramatic closure or damaging transformation of the Facebook social platform, the regulatory pressure and potential anti-trust measures pushed by governments, such as a mandatory break-up of Facebook Inc.'s operations and services, would not necessarily lead to such consequences. This is only one possible scenario. We are indeed in the realm of speculation here, and caution about causal likelihood is required; at least if impartiality is a genuine concern.


Facebook emotion study breached ethical guidelines, researchers say

Lack of 'informed consent' means that Facebook experiment on nearly 700,000 news feeds broke rules on tests on human subjects, say scientists


Researchers have roundly condemned Facebook's experiment in which it manipulated nearly 700,000 users' news feeds to see whether it would affect their emotions, saying it breaches ethical guidelines for "informed consent".

James Grimmelmann, professor of law at the University of Maryland, points out in an extensive blog post that "Facebook didn't give users informed consent" to allow them to decide whether to take part in the study, as required by US rules on human subjects research.

"The study harmed participants," because it changed their mood, Grimmelmann comments, adding "This is bad, even for Facebook."

But one of the researchers, Adam Kramer, posted a lengthy defence on Facebook , saying it was carried out "because we care about the emotional impact of Facebook and the people that use our product." He said that he and his colleagues "felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out."

The experiment hid certain elements from 689,003 people's news feeds – about 0.04% of users, or 1 in 2,500 – over the course of one week in 2012. It hid "a small percentage" of emotional words from people's news feeds, without their knowledge, to test what effect that had on the statuses or "Likes" that they then posted or reacted to.

The results found that, contrary to expectation, people's emotions were reinforced by what they saw – what the researchers called "emotional contagion".

But the study has come in for severe criticism because, unlike the advertising that Facebook shows – which arguably aims to alter people's behaviour by making them buy products or services from those advertisers – the changes to the news feeds were made without users' knowledge or explicit consent.

Max Masnick, a researcher with a doctorate in epidemiology who says of his work that "I do human-subjects research every day", argues that the structure of the experiment means there was no informed consent – a key element of any study on humans.

"As a researcher, you don’t get an ethical free pass because a user checked a box next to a link to a website’s terms of use. The researcher is responsible for making sure all participants are properly consented. In many cases, study staff will verbally go through lengthy consent forms with potential participants, point by point. Researchers will even quiz participants after presenting the informed consent information to make sure they really understand.

"Based on the information in the PNAS paper, I don’t think these researchers met this ethical obligation."

Kramer does not address the topic of informed consent in his blog post. But he says that "my co-authors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety."

When asked whether the study had had an ethical review before being approved for publication, the US National Academy of Sciences, which published the controversial paper in its Proceedings of the National Academy of Sciences (PNAS), told the Guardian that it was investigating the issue.


Consent and ethics in Facebook’s emotional manipulation study


David Hunter, Associate Professor of Medical Ethics, Flinders University

Disclosure statement

David Hunter does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Significant concerns have been raised about the ethics of research carried out by Facebook, after it was revealed that the company manipulated the news feeds of hundreds of thousands of users.

In 2012 the social media giant conducted a study on 689,003 users, without their knowledge, to see how they posted if it systematically removed either some positive or some negative posts by others from their news feed over a single week.

At first, Facebook’s representatives seemed quite blasé about the anger over the study, seeing it primarily as an issue of data privacy which they considered had been handled well.

“There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely,” a Facebook spokesperson said.

In the paper, published in the Proceedings of the National Academy of Sciences, the authors say they had “informed consent” to carry out the research as it was consistent with Facebook’s Data Use Policy, which all users agree to when creating an account.

One of the authors has this week defended the study process, although he did apologise for any upset it caused, saying: “In hindsight, the research benefits of the paper may not have justified all of this anxiety.”

Why all the outrage?


So why are Facebook, the researchers and those raising concerns in academia and the news media so far apart in their opinions?

Is this just standard questionable corporate ethics in practice or is there a significant ethical issue here?

I think the source of the disagreement really is the consent (or lack thereof) in the study, so I will disentangle what the concerns about consent are and why they matter.

There are two main things that would normally be taken as needing consent in this study:

  • accessing the data
  • manipulating the news feed.

Accessing the data

This is what the researchers and Facebook focussed on. They claimed that agreeing to Facebook’s Data Use Policy when you sign up to Facebook constitutes informed consent. Let’s examine that claim.

We use the information we receive about you […] for internal operations, including troubleshooting, data analysis, testing, research and service improvement.

It’s worth noting that this in no way constitutes informed consent since it’s unlikely that all users have read it thoroughly. While it informs you that your data may be used, it doesn’t tell you how it will be used.

But given that the data has been provided to the researchers in an appropriately anonymised format, the data is no longer personal, and hence this mere consent is probably sufficient.

It’s similar to practices in other areas such as health practice audits which are conducted with similar mere consent.

So insofar as Facebook and the researchers are focusing on data privacy, they are right. There is nothing significant to be concerned about here, barring the misdescription of the process as “informed consent”.

Manipulating the news feed

This was not a merely observational study but instead contained an intervention – manipulating the content of users’ news feed.


Informed consent is likewise lacking for this intervention, placing this clearly into the realm of interventional research without consent.

This is not to say it is necessarily unethical, since we sometimes permit such research on the grounds that the worthwhile research aims cannot be achieved any other way.

Nonetheless there are a number of standards that research without consent is expected to meet before it can proceed:

1. Lack of consent must be necessary for the research

Could this research be done another way? It could be argued that this could have been done in a purely observational fashion by simply picking out users whose news feeds were naturally more positive or negative.

Others might say that this would introduce confounding factors, reducing the validity of the study.

Let’s accept that it would have been challenging to do this any other way.

2. Must be no more than minimal risk

It’s difficult to know what risk the study posed – judging by the relatively small effect size, probably little – but we have to be cautious about reading this off the reported data, for two reasons.

First, the data is simply what people have posted to Facebook, which only indirectly measures the impact – really significant effects, such as someone committing suicide, wouldn’t be captured by this.

And second, we must look at this from the perspective of the time before the study was conducted, when the outcomes were not known.

Still for most participants the risks were probably minimal, particularly when we take into account that their news feed may have naturally had more or less negative/positive posts in any given week.

3. Must have a likely positive balance of benefits over harms

While the harms caused directly by the study were probably minimal, the sheer number of participants means that in aggregate they can be quite significant.

Likewise, given the number of participants, unlikely but highly significant bad events may have occurred, such as the negative news feed being the last straw for someone’s marriage.

This will, of course, be somewhat balanced out by the positive effects of the study for participants which likewise aggregate.

What we further need to know is what other benefits the research may have been intended to have. This is unclear, though we know Facebook has an interest in improving its news feed which is presumably commercially beneficial.

We probably don’t have enough information to make a judgement about whether the benefits outweigh the risks of the research and the disrespect of subjects’ autonomy that it entails. I admit to being doubtful.

4. Debriefing & opportunity to opt out

Typically in this sort of research there ought to be a debriefing once the research is complete, explaining what has been done, why and giving the participants an option to opt out.

This clearly wasn’t done. While this is sometimes justified on the grounds of the difficulty of doing so, in this case Facebook itself would seem to have the ideal social media platform that could have facilitated this.

The rights and wrongs

So Facebook and the researchers were right to think that, with regard to data access, the study is ethically robust. But the academics and news media raising concerns about the study are also correct – there are significant ethical failings here regarding our norms of interventional research without consent.

Facebook claims in its Data Usage Policy that: “Your trust is important to us.”

If this is the case, they need to recognise the faults in how they conducted this study, and I’d strongly recommend that they seek advice from ethicists on how to make their approval processes more robust for future research.

Facebook’s ethical failures are not accidental; they are part of the business model

David Lauer

Urvin AI, 413 Virginia Ave, Collingswood, NJ 08107 USA

Facebook’s stated mission is “to give people the power to build community and bring the world closer together.” But a deeper look at their business model suggests that it is far more profitable to drive us apart. By creating “filter bubbles”—social media algorithms designed to increase engagement and, consequently, create echo chambers where the most inflammatory content achieves the greatest visibility—Facebook profits from the proliferation of extremism, bullying, hate speech, disinformation, conspiracy theory, and rhetorical violence. Facebook’s problem is not a technology problem. It is a business model problem. This is why solutions based in technology have failed to stem the tide of problematic content. If Facebook employed a business model focused on efficiently providing accurate information and diverse views, rather than addicting users to highly engaging content within an echo chamber, the algorithmic outcomes would be very different.

Facebook’s failure to check political extremism, [ 15 ] willful disinformation, [ 39 ] and conspiracy theory [ 43 ] has been well-publicized, especially as these unseemly elements have penetrated mainstream politics and manifested as deadly, real-world violence. So it naturally raised more than a few eyebrows when Facebook’s Chief AI Scientist Yann LeCun tweeted his concern [ 32 ] over the role of right-wing personalities in downplaying the severity of the COVID-19 pandemic. Critics were quick to point out [ 29 ] that Facebook has profited handsomely from exactly this brand of disinformation. Consistent with Facebook’s recent history on such matters, LeCun was both defiant and unconvincing.

In response to a frenzy of hostile tweets, LeCun made the following four claims:

  • Facebook does not cause polarization or so-called “filter bubbles” and that “most serious studies do not show this.”
  • Critics [ 30 ] who argue that Facebook is profiting from the spread of misinformation are “factually wrong.” 1
  • Facebook uses AI-based technology to filter out [ 33 ]:
      • Hate speech;
      • Calls to violence;
      • Bullying; and
      • Disinformation that endangers public safety or the integrity of the democratic process.

Absent from the claims above is acknowledgement that the company’s profitability depends substantially upon the polarization LeCun insists does not exist.

Facebook has had a profound impact on our access to ideas, information, and one another. It has unprecedented global reach, and in many markets serves as a de-facto monopolist. The influence it has over individual and global affairs is unique in human history. Mr. LeCun has been at Facebook since December 2013, first as Director of AI Research and then as Chief AI Scientist. He has played a leading role in shaping Facebook’s technology and approach. Mr. LeCun’s problematic claims demand closer examination. What follows, therefore, is a response to these claims which will clearly demonstrate that Facebook:

  • Elevates disinformation campaigns and conspiracy theories from the extremist fringes into the mainstream, fostering, among other effects, the resurgent anti-vaccination movement, broad-based questioning of basic public health measures in response to COVID-19, and the proliferation of the Big Lie of 2020—that the presidential election was stolen through voter fraud [ 16 ];
  • Empowers bullies of every size, from cyber-bullying in schools, to dictators who use the platform to spread disinformation, censor their critics, perpetuate violence, and instigate genocide;
  • Defrauds both advertisers and newsrooms, systematically and globally, with falsified video engagement and user activity statistics;
  • Reflects an apparent political agenda espoused by a small core of corporate leaders, who actively impede or overrule the adoption of good governance;
  • Brandishes its monopolistic power to preserve a social media landscape absent meaningful regulatory oversight, privacy protections, safety measures, or corporate citizenship; and
  • Disrupts intellectual and civil discourse, at scale and by design.

I deleted my Facebook account

I deleted my account years ago for the reasons noted above, and a number of far more personal reasons. So when LeCun reached out to me, demanding evidence for my claims regarding Facebook’s improprieties, it was via Twitter. What proof did I have that Facebook creates filter bubbles that drive polarization?

In anticipation of my response, he offered the claims highlighted above. As evidence of his claims, he directed my attention to a single research paper [ 23 ] that, on closer inspection, does not appear at all to reinforce his case.

The entire exchange also suggests that senior leadership at Facebook still suffers from a massive blindspot regarding the harm that its platform causes—that they continue to “move fast and break things” without regard for the global impact of their behavior.

LeCun’s comments confirm the concerns that many of us have held for a long time: Facebook has declined to resolve its systemic problems, choosing instead to paper over these deep philosophical flaws with advanced, though insufficient, technological solutions. Even when Facebook takes occasion to announce its triumphs in the ethical use of AI, such as its excellent work [ 8 ] detecting suicidal tendencies, its advancements pale in comparison to the inherent problems written into its algorithms.

This is because, fundamentally, their problem is not a failure of technology, nor a shortcoming in their AI filters. Facebook’s problem is its business model. Facebook makes superficial technology changes, but at its core, profits chiefly from engagement and virality. Study after study has found that “lies spread faster than the truth,” [ 47 ] “conspiracy theories spread through a more decentralized network,” [ 41 ] and that “politically extreme sources tend to generate more interactions from users.” 2 Facebook knows that the most efficient way to maximize profitability is to build algorithms that create filter bubbles and spread viral misinformation.
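To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch (all post features, weights, and numbers are invented; Facebook's actual ranking models are proprietary and far more complex) of how a feed ranker that optimizes only for predicted engagement tends to surface the most inflammatory item first:

```python
# Hypothetical illustration only: a toy feed ranker scored purely on predicted
# engagement. Field names and weights are invented; this is not Facebook's code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click/share probability
    predicted_dwell: float    # estimated seconds of attention

def engagement_score(post: Post) -> float:
    # A pure engagement objective: no term for accuracy, diversity, or civility.
    return 0.7 * post.predicted_clicks + 0.3 * (post.predicted_dwell / 60.0)

feed = [
    Post("Measured, sourced policy analysis", predicted_clicks=0.02, predicted_dwell=40),
    Post("Outrage-bait conspiracy rumor",     predicted_clicks=0.11, predicted_dwell=75),
    Post("Family photo album",                predicted_clicks=0.05, predicted_dwell=20),
]

# Ranking by engagement alone puts the most inflammatory item at the top,
# because such content reliably attracts the most clicks and attention.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.3f}  {post.text}")
```

Nothing in an objective like this penalizes falsehood or polarization; the ranking outcome follows directly from what the objective rewards, which is why the problem is framed here as a business model problem rather than a filtering problem.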

This is not a fringe belief or controversial opinion. This is a reality acknowledged even by those who have lived inside of Facebook’s leadership structure. As the former director of monetization for Facebook, Tim Kendall explained in his Congressional testimony, “social media services that I, and others have built, have torn people apart with alarming speed and intensity. At the very least we have eroded our collective understanding—at worst, I fear we are pushing ourselves to the brink of a civil war.” [ 38 ]

Facebook’s black box

To effectively study behavior on Facebook, we must be able to study Facebook’s algorithms and AI models. Therein lies the first problem. The data and transparency to do so are simply not there. Facebook does not practice transparency—they do not make comprehensive data available on their recommendation and filtering algorithms, or their other implementations of AI. One organization attempting to study the spread of misinformation, NYU’s Cybersecurity for Democracy, explains, “[o]ur findings are limited by the lack of data provided by Facebook…. Without greater transparency and access to data, such research questions are out of reach.” 3

Facebook’s algorithms and AI models are proprietary, and they are intentionally hidden from us. While this is normal for many companies, no other company has 2.85 billion monthly active users. Any platform that touches so many lives must be studied so that we can truly understand its impact. Yet Facebook does not make the kind of data available that is needed for robust study of the platform.

Facebook would likely counter this, and point to their partnership with Harvard’s Institute for Quantitative Social Science (Social Science One) as evidence that they are making data available to researchers [ 19 ]. While this partnership is one step in the right direction, there are several problems with this model:

  • The data are extremely limited, consisting at the moment solely of web page addresses that were shared on Facebook over an 18-month window from 2017 to 2019.
  • Researchers have to apply for access to the data through Social Science One, which acts as a gatekeeper of the data.
  • If approved, researchers have to execute an agreement directly with Facebook.

This is not an open, scientific process. It is, rather, a process that empowers administrators to cherry-pick research projects that favor their perspective. If Facebook were serious about facilitating academic research, they would provide far greater access to, availability of, and insight into the data. There are legitimate privacy concerns around releasing data, but there are far better ways to address those concerns while fostering open, vibrant research.

Does Facebook cause polarization?

LeCun cited a single study as evidence that Facebook does not cause polarization. But do the findings of this study support Mr. LeCun’s claims?

The study concludes that “polarization has increased the most among the demographic groups least likely to use the Internet and social media.” The study does not, however, actually measure this type of polarization directly. Its primary data-gathering instrument—a survey on polarization—did not ask whether respondents were on the Internet or if they used social media. Instead, the study estimates whether an individual respondent is likely to be on the Internet based on an index of demographic factors which suggest “predicted” Internet use. As explained in the study, “the main predictor [they] focus on is age” [ 23 ]. Age is estimated to be negatively correlated with social media usage. Therefore, since older people are also shown to be more politically polarized, LeCun takes this as evidence that social media use does not cause polarization.

This assumption of causality is flawed. The study does not point to a causal relationship between these demographic factors and social media use. It simply says that these demographic factors drive polarization. Whether these factors have a correlational or causative relationship with the Internet and social media use is complete conjecture. The author of the study himself caveats any such conclusions, noting that “[t]hese findings do not rule out any effect of the internet or social media on political polarization.” [ 5 ].
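The statistical problem can be illustrated with a small, entirely hypothetical simulation (the parameters are invented, and it is cross-sectional rather than the study's over-time design): even when platform use has a genuine polarizing effect on the individuals who use it, the group least likely to use the platform can still appear the most polarized, because age drives both predicted use and baseline polarization.

```python
# Toy simulation (invented parameters) showing why "the least-online groups are
# the most polarized" does not rule out a causal effect of social media use.
import random
random.seed(0)

def simulate_person(age):
    uses_platform = random.random() < max(0.10, 0.90 - 0.01 * age)  # proxy: use falls with age
    baseline = 0.05 * age                      # polarization rises with age for unrelated reasons
    effect = 0.5 if uses_platform else 0.0     # a real, positive causal effect of platform use
    return uses_platform, baseline + effect + random.gauss(0, 0.5)

people = [simulate_person(random.randint(18, 85)) for _ in range(100_000)]
users = [score for uses, score in people if uses]
non_users = [score for uses, score in people if not uses]

print("built-in causal effect of platform use: +0.5")
print("mean polarization, users:    ", round(sum(users) / len(users), 2))
print("mean polarization, non-users:", round(sum(non_users) / len(non_users), 2))
```

In this toy example the non-user group, which skews older, comes out more polarized on average than the user group despite the built-in positive causal effect of use, which is exactly why group-level comparisons keyed to predicted use cannot settle the causal question.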

Not only is LeCun’s assumption flawed, it is directly refuted by a recent Pew Research study [ 3 ], which found that half of US adults aged 65+ are on Facebook (50%), a larger share of that age group than uses any other social network. If anything, older age is more clearly correlated with Facebook use than with use of any other social network.

Moreover, in 2020, the MIS Quarterly journal published a study by Steven L. Johnson, et al. that explored this problem and found that the “more time someone spends on Facebook, the more polarized their online news consumption becomes. This evidence suggests Facebook indeed serves as an echo chamber especially for its conservative users” [ 24 ].

Allcott et al. also explore this question in “The Welfare Effects of Social Media” (November 2019), beginning with a review of other studies confirming a relationship between social media use, well-being, and political polarization [ 1 ]:

More recent discussion has focused on an array of possible negative impacts. At the individual level, many have pointed to negative correlations between intensive social media use and both subjective well-being and mental health. Adverse outcomes such as suicide and depression appear to have risen sharply over the same period that the use of smartphones and social media has expanded. Alter (2018) and Newport (2019), along with other academics and prominent Silicon Valley executives in the “time well-spent” movement, argue that digital media devices and social media apps are harmful and addictive. At the broader social level, concern has focused particularly on a range of negative political externalities. Social media may create ideological “echo chambers” among like-minded friend groups, thereby increasing political polarization (Sunstein 2001, 2017; Settle 2018). Furthermore, social media are the primary channel through which misinformation spreads online (Allcott and Gentzkow 2017), and there is concern that coordinated disinformation campaigns can affect elections in the US and abroad.

Allcott’s 2019 study uses a randomized experiment in the run-up to the November 2018 midterm elections to examine how Facebook affects several individual and social welfare measures. They found that:

deactivating Facebook for the four weeks before the 2018 US midterm election (1) reduced online activity, while increasing offline activities such as watching TV alone and socializing with family and friends; (2) reduced both factual news knowledge and political polarization; (3) increased subjective well-being; and (4) caused a large persistent reduction in post-experiment Facebook use.

In other words, not using Facebook for a month made you happier and resulted in less future usage. In fact, they say that “deactivation significantly reduced polarization of views on policy issues and a measure of exposure to polarizing news.” None of these findings would come as a surprise to anybody who works at Facebook.
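What makes the Allcott et al. design stronger is randomization: because deactivation was randomly assigned, a simple difference in group means estimates its causal effect, with no need for demographic proxies. A minimal sketch of that estimator, using invented numbers rather than the study's data:

```python
# Minimal sketch of the estimator behind a randomized deactivation experiment:
# with random assignment, the difference in group means estimates the causal
# effect. All numbers are invented for illustration, not Allcott et al.'s data.
import random
import statistics
random.seed(1)

def simulated_outcome(deactivated: bool) -> float:
    base = random.gauss(5.0, 1.0)                  # hypothetical polarization index
    return base - (0.6 if deactivated else 0.0)    # assumed benefit of deactivation

treatment = [simulated_outcome(True) for _ in range(2000)]   # paid to deactivate
control = [simulated_outcome(False) for _ in range(2000)]    # kept using Facebook

effect = statistics.mean(treatment) - statistics.mean(control)
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5
print(f"estimated effect of deactivation: {effect:.2f} (standard error {se:.2f})")
```

Because assignment is random, the age confounding that undermines the proxy-based argument above simply cannot arise here.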

“A former Facebook AI researcher” confirmed that they ran “‘study after study’ confirming the same basic idea: models that maximize engagement increase polarization” [ 21 ]. Not only did Facebook know this, but they continued to design and build their recommendation algorithms to maximize user engagement, knowing that this meant optimizing for extremism and polarization. 4

Facebook understood what they were building according to Tim Kendall’s Congressional testimony in 2020. He explained that “we sought to mine as much attention as humanly possible and turn [sic] into historically unprecedented profits” [ 38 ]. He went on to explain that their inspiration was “Big Tobacco’s playbook … to make our offering addictive at the outset.” They quickly figured out that “extreme, incendiary content” directly translated into “unprecedented engagement—and profits.” He was the director of monetization for Facebook—few would have been better positioned to understand Facebook’s motivations, findings and strategy.

Engagement, filter bubbles, and executive compensation

The term “filter bubble” was coined by Eli Pariser who wrote a book with that title, exploring how social media algorithms are designed to increase engagement and create echo chambers where inflammatory posts are more likely to go viral. Filter bubbles are not just an algorithmic outcome; often we filter our own lives, surrounding ourselves with friends (online and offline) who are more likely to agree with our philosophical, religious and political views.

Social media platforms capitalize on our natural tendency toward filtered engagement. These platforms build algorithms, and structure executive compensation, [ 27 ] to maximize such engagement. By their very design, social media curation and recommendation algorithms are engineered to maximize engagement, and thus, are predisposed to create filter bubbles.

Facebook has long attracted criticism for its pursuit of growth at all costs. A recent profile of Facebook’s AI efforts details the difficulty of getting “buy-in or financial support when the work did not directly improve Facebook’s growth” [ 21 ]. Andrew Bosworth, a Vice President at Facebook, said in a 2016 memo that nothing matters but growth, and that “all the work we do in growth is justified” regardless of whether “it costs someone a life by exposing someone to bullies” or if “somebody dies in a terrorist attack coordinated on our tools” [ 31 ].

Bosworth and Zuckerberg went on to claim [ 36 ] that the shocking memo was merely an attempt at being provocative. Certainly, it succeeded in this aim. But what else could they really say? It’s not a great look. And it looks even worse when you consider that Facebook’s top brass really do get paid more when these things happen. The above-referenced report is based on interviews with multiple former product managers at Facebook, and shows that their executive compensation system is largely based around their most important metric–user engagement. This creates a perverse incentive. And clearly, by their own admission, Facebook will not allow a few casualties to get in the way of their executive compensation.

Is it incidental or intentional?

Yaël Eisenstat, a former CIA analyst who specialized in counter-extremism, went on to work at Facebook out of concern that the social media platform was increasing radicalization and political polarization. She explained in a TED talk [ 13 ] that the current information ecosystem is manipulating its users, and that “social media companies like Facebook profit off of segmenting us and feeding us personalized content that both validates and exploits our biases. Their bottom line depends on provoking a strong emotion to keep us engaged, often incentivizing the most inflammatory and polarizing voices.” This emotional response results in more than just engagement—it results in addiction.

Eisenstat joined Facebook in 2018 and began to explore the issues which were most divisive on the social media platform. She began asking questions internally about what was causing this divisiveness. She found that “the largest social media companies are antithetical to the concept of reasoned discourse … Lies are more engaging online than truth, and salaciousness beats out wonky, fact-based reasoning in a world optimized for frictionless virality. As long as algorithms’ goals are to keep us engaged, they will continue to feed us the poison that plays to our worst instincts and human weaknesses.”

She equated Facebook’s algorithmic manipulation to the tactics that terrorist recruiters use on vulnerable youth. She offered Facebook a plan to combat political disinformation and voter suppression. She has claimed that the plan was rejected, and Eisenstat left after just six months.

As noted earlier, LeCun flatly denies [ 34 ] that Facebook creates filter bubbles that drive polarization. In sharp contrast, Eisenstat explains that such an outcome is a feature of their algorithm, not a bug. The Wall St. Journal reported that in 2018, senior executives at Facebook were informed of the following conclusions during an internal presentation [ 22 ]:

  • “Our algorithms exploit the human brain’s attraction to divisiveness… [and] if left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention and increase time on the platform.”
  • The platform aggravates polarization and tribal behavior.
  • Some proposed algorithmic changes would “disproportionately affect[] conservative users and publishers.”
  • Looking at data for Germany, an internal report found “64% of all extremist group joins are due to our recommendation tools … Our recommendation systems grow the problem.”

These are Facebook’s own words, and arguably, they provide the social media platform with an invaluable set of marketing prerogatives. They are reinforced by Tim Kendall’s testimony as discussed above.

“Most notably,” reported the WSJ, “the project forced Facebook to consider how it prioritized ‘user engagement’—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.” As noted in the section above, executive compensation was tied to “user engagement,” which meant product developers at Facebook were incentivized to design systems in this very way. 5

Mark Zuckerberg and Joel Kaplan reportedly [ 22 ] dismissed the conclusions from the 2018 presentation, calling efforts to bring greater civility to conversations on the social media platform “paternalistic.” Zuckerberg went on to say that he would “stand up against those who say that new types of communities forming on social media are dividing us.” Kaplan reportedly “killed efforts to build a classification system for hyperpolarized content.” Failing to address this has resulted in algorithms that, as Tim Kendall explained, “have brought out the worst in us. They have literally rewired our brains so that we are detached from reality and immersed in tribalism” [ 38 ].

Facebook would have us believe that it has made great strides in confronting these problems over just the last two years, as Mr. LeCun has claimed. But at present, the burden of proof is on Facebook to produce the full, raw data so that independent researchers can make a fair assessment of his claims.

The AI filter

According to LeCun’s tweets cited at the beginning of this paper, Facebook’s AI-powered filter cleanses the platform of:

  • Hate speech;
  • Calls to violence;
  • Bullying; and
  • Disinformation that endangers public safety or the integrity of the democratic process.

These are his words, so we will refer to them even while the actual definitions of hate speech, calls to violence, and other terms are potentially controversial and open to debate.

These claims are provably false. While “AI” (along with some very large, manual curation operations in developing countries) may effectively filter some of this content, at Facebook’s scale, some is not enough.

Let’s examine the claims a little closer.

Does Facebook actually filter out hate speech?

An investigation by the UK-based counter-extremist organization ISD (Institute for Strategic Dialogue) found that Facebook’s algorithm “actively promotes” Holocaust denial content [ 20 ]. The same organization, in another report, documents how Facebook’s “delays or mistakes in policy enforcement continue to enable hateful and harmful content to spread through paid targeted ads.” [ 17 ]. They go on to explain that “[e]ven when action is taken on violating ad content, such a response is often reactive and delayed, after hundreds, thousands, or potentially even millions of users have already been served those ads on their feeds.” 6

Zuckerberg admitted in April 2018 that hate speech in Myanmar was a problem, and pledged to act. Four months later, Reuters found more than “1000 examples of posts, comments, images and videos attacking the Rohingya or other Myanmar Muslims that were on Facebook” [ 45 ]. As recently as June 2020 there were reports [ 7 ] of troll farms using Facebook to intimidate opponents of Rodrigo Duterte in the Philippines with death threats and hateful comments.

Does Facebook actually filter out calls to violence?

The Sri Lankan government had to block access to Facebook “amid a wave of violence against Muslims … after Facebook ignored years of calls from both the government and civil society groups to control ethnonationalist accounts that spread hate speech and incited violence.” [ 42 ] A report from the Center for Policy Alternatives in September 2014 detailed evidence of 20 hate groups in Sri Lanka, and informed Facebook. In March of 2018, Buzzfeed reported that “16 out of the 20 groups were still on Facebook”. 7

When former President Trump tweeted, in response to Black Lives Matter protests, that when “the looting starts, the shooting starts,” the message was liked and shared hundreds of thousands of times across Facebook and Instagram, even as other social networks such as Twitter flagged the message for its explicit incitement of violence [ 48 ] and prevented it from being retweeted.

Facebook played a pivotal role in the planning of the January 6th insurrection in the US, providing an unchecked platform for proliferation of the Big Lie, radicalization around this lie, and coordinated organization around explicitly-stated plans to engage in violent confrontation at the nation’s capital on the outgoing president’s behalf. Facebook’s role in the deadly violence was far greater and more widespread than the role of Parler and the other fringe right-wing platforms that attracted so much attention in the aftermath of the attack [ 11 ].

Does Facebook actually filter out cyberbullying?

According to Enough Is Enough, a non-partisan, non-profit organization whose mission is “making the Internet safer for children and families,” the answer is a resounding no. According to their most recent cyberbullying statistics, [ 10 ] 47% of young people have been bullied online, and the two most prevalent platforms are Instagram at 42% and Facebook at 37%.

In fact, Facebook is failing to protect children on a global scale. According to a UNICEF poll of children in 30 countries, one in every three young people says that they have been victimized by cyberbullying. And one in five says the harassment and threat of actual violence caused them to skip school. According to the survey, conducted in concert with the UN Special Representative of the Secretary-General (SRSG) on Violence against Children, “almost three-quarters of young people also said social networks, including Facebook, Instagram, Snapchat and Twitter, are the most common place for online bullying” [ 49 ].

Does Facebook actually filter out “disinformation that endangers public safety or the integrity of the democratic process?”

To list the evidence contradicting this point would be exhausting. Below are just a few examples:

  • The Computational Propaganda Research Project found in their 2019 Global Inventory of Organized Social Media Manipulation that 70 countries had disinformation campaigns organized on social media in 2019, with Facebook as the top platform [ 6 ].
  • A Facebook whistleblower produced a 6600 word memo detailing case after case of Facebook “abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe.” [ 44 ]
  • Facebook is ground zero for anti-vaccination and pandemic misinformation, with the 26-minute conspiracy theory film “Plandemic” going viral on Facebook in April 2020 and garnering tens of millions of views. Facebook’s attempt to purge itself of anti-vaccination disinformation was easily thwarted when the groups guilty of proliferating this content removed the word “vaccine” from their names. In addition to undermining public health interests by spreading provably false content, these anti-vaccination groups have obscured meaningful discourse about the actual health concerns and risks that may or may not be connected to vaccinations. A paper from May 2020 attempts to map out the “multi-sided landscape of unprecedented intricacy that involves nearly 100 million individuals” [ 25 ] that are entangled with anti-vaccination clusters. That report predicts that such anti-vaccination views “will dominate in a decade” given their explosive growth and intertwining with undecided people.
  • According to the Knight Foundation and Gallup [ 26 ], 75% of Americans believe they “were exposed to misinformation about the election” on Facebook during the 2020 US presidential election. This is one of those rare issues on which Republicans (76%), Democrats (75%) and Independents (75%) agree: Facebook was the primary source for election misinformation.

If those AI filters are in fact working, they are not working very well.

All of this said, Facebook’s reliance on “AI filters” misses a critical point, which is that you cannot have AI ethics without ethics [ 30 ]. These problems cannot be solved with AI. These problems cannot be solved with checklists, incremental advances, marginal changes, or even state-of-the-art deep learning networks. These problems are caused by the company’s entire business model and mission. Bosworth’s provocative quotes above, along with Tim Kendall’s direct testimony demonstrate as much.

These are systemic issues, not technological ones. Yael Eisenstat put it best in her TED talk: “as long as the company continues to merely tinker around the margins of content policy and moderation, as opposed to considering how the entire machine is designed and monetized, they will never truly address how the platform is contributing to hatred, division and radicalization.”

Facebook does not want to be the arbiter of truth

We should probably take comfort in Facebook’s claim that it does not wish to be the “arbiter of political truth.” After all, Facebook has a troubled history with the truth. Their ad-buying customers proved as much when Facebook was forced to pay $40 million to settle a lawsuit alleging that it had inflated, “by up to 900 percent,” the time it said users spent watching videos [ 4 ]. While Facebook would neither admit nor deny the truth of this allegation, they did admit to the error in a 2016 statement [ 14 ].

This was not some innocuous lie that just cost a few firms some money either. As Slate explained in a 2018 article, “many [publications] laid off writers and editors and cut back on text stories to focus on producing short, snappy videos for people to watch in their Facebook feeds.” [ 40 ] People lost their livelihoods to this deception.

Is this an isolated incident? Or is fraud at Facebook systemic? Matt Stoller describes the contents of recently unsealed legal documents [ 12 ] in a lawsuit alleging Facebook has defrauded advertisers for years [ 46 ]:

The documents revealed that Facebook COO Sheryl Sandberg directly oversaw the alleged fraud for years. The scheme was simple. Facebook deceived advertisers by pretending that fake accounts represented real people, because ad buyers choose to spend on ad campaigns based on where they think their customers are. Former employees noted that the corporation did not care about the accuracy of numbers as long as the ad money was coming in. Facebook, they said, “did not give a shit.” The inflated statistics sometimes led to outlandish results. For instance, Facebook told advertisers that its services had a potential reach of 100 million 18–34-year-olds in the United States, even though there are only 76 million people in that demographic. After employees proposed a fix to make the numbers honest, the corporation rejected the idea, noting that the “revenue impact” for Facebook would be “significant.” One Facebook employee wrote, “My question lately is: how long can we get away with the reach overestimation?” According to these documents, Sandberg aggressively managed public communications over how to talk to advertisers about the inflated statistics, and Facebook is now fighting against her being interviewed by lawyers in a class action lawsuit alleging fraud.

Facebook’s embrace of deception extends from its ad-buying fraud to the content on its platforms. For instance:

  • Those who “aid[] and abet[] the spread of climate misinformation” on Facebook benefit from “a giant loophole in its fact-checking program”: Facebook gives its staff the power to overrule climate scientists by deeming climate disinformation “opinion” [ 2 ].
  • The former managing editor of Snopes reported that Facebook merely used the well-regarded fact-checking site for “crisis PR,” did not take fact-checking seriously, and routinely ignored its concerns [ 35 ]. Snopes pushed back hard against the Myanmar disinformation campaign, among many other issues, to no avail.
  • ProPublica recently reported [ 18 ] that Sheryl Sandberg silenced and censored a Kurdish militia group that “the Turkish government had targeted” in order to safeguard Facebook’s revenue from Turkey.
  • Mark Zuckerberg and Joel Kaplan intervened [ 37 ] in April 2019 to keep Alex Jones on the platform, despite the right-wing conspiracy theorist’s leading role in spreading disinformation about the 2012 Sandy Hook elementary school shooting and the 2018 Parkland high school shooting.

Arguably, Facebook’s executive team has not only ceded responsibility as an “arbiter of truth,” but has also, on several notable occasions, intervened to ensure the continued proliferation of disinformation.

How do we disengage?

Facebook’s business model is focused entirely on increasing growth and user engagement, and its algorithms are extremely effective at doing so. The steps Facebook has taken, such as building “AI filters” or partnering with independent fact-checkers, are superficial and toothless. They cannot begin to untangle the systemic issues at the heart of this matter, because those issues flow directly from the business model that is Facebook’s entire reason for being.
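As a rough illustration of why tinkering at the margins changes little, here is a toy feedback loop. All values are assumed for illustration and do not describe Facebook’s actual algorithm: a feed that keeps showing more of whatever generated the most engagement last round will, over time, hand nearly all of its distribution to the most provocative content, without anyone ever choosing that outcome explicitly.

```python
# Toy rich-get-richer loop -- assumed, illustrative dynamics only, not Facebook's algorithm.
# Each item starts with the same score; impressions are allocated in proportion to score,
# and scores grow with observed engagement. Higher-engagement content compounds its lead.

scores = {"local_news": 1.0, "nuanced_analysis": 1.0, "conspiracy_outrage": 1.0}

# Assumed per-impression engagement rates (inflammatory content engages more).
engagement_rate = {"local_news": 0.03, "nuanced_analysis": 0.02, "conspiracy_outrage": 0.12}

for _ in range(1000):
    total = sum(scores.values())
    for item in scores:
        impressions = scores[item] / total                    # share of the feed this round
        scores[item] += impressions * engagement_rate[item]   # reward whatever engaged

share = {item: round(scores[item] / sum(scores.values()), 2) for item in scores}
print(share)  # the outrage item ends up with the overwhelming share of distribution
```

Swapping in a better classifier, a fact-checking partner, or a content policy tweak does not alter this loop; only changing what the system optimizes for would.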

So what can be done? Certainly, criminality needs to be prosecuted; executives should go to jail for fraud. Social media companies, and their organizational leaders, should face legal liability for the impact of the content on their platforms. One effort to impose such liability in the US centers on reforming Section 230 of the Communications Decency Act. It, and similar laws around the world, should be reformed to create far more meaningful accountability and liability for the promotion of disinformation, violence, and extremism.

Most importantly, monopolies should be busted. Existing antitrust laws should be used to break up Facebook and restrict its future activities and acquisitions.

The matters outlined here have been brought to the attention of Facebook’s leadership in countless ways that are well documented and readily provable. But the changes required go well beyond more effective use of AI. At heart, Facebook will not change because it does not want to and is not incentivized to. Facebook must be regulated, and Facebook’s leadership structure must be dismantled.

It seems unlikely that politicians and regulators have the political will to do all of this, but there are some encouraging signs, especially regarding antitrust investigations [ 9 ] and lawsuits [ 28 ] in both the US and Europe. Still, this issue goes well beyond mere enforcement. Somehow we must shift the incentives for social media companies, which compete for, and monetize, our attention. Until we stop rewarding Facebook’s illicit behavior with our engagement, it is hard to see a way out of our current condition. These companies are building technology designed to draw us in with problematic content, addict us to outrage, and ultimately drive us apart. We no longer agree on shared facts or truths, a condition that is turning political adversaries into bitter enemies and transforming ideological difference into seething contempt. Rather than help us lead more fulfilling lives or find truth, Facebook is helping us discover enemies among our fellow citizens and bombarding us with reasons to hate them, all in the service of profitability. This path is unsustainable.

The only thing Facebook truly understands is money, and all of their money comes from engagement. If we disengage, they lose money. If we delete, they lose power. If we decline to be a part of their ecosystem, perhaps we can collectively return to a shared reality.

1 Facebook executives have, themselves, acknowledged that Facebook profits from the spread of misinformation: https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news .

2 Cybersecurity for Democracy. (March 3, 2021). “Far-right news sources on Facebook more engaging.” https://medium.com/cybersecurity-for-democracy/far-right-news-sources-on-facebook-more-engaging-e04a01efae90 .

5 Facebook claims to have since broadened the metrics it uses to calculate executive pay, but to what extent this might offset the prime directive of maximizing user engagement is unclear.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
