What does freedom of speech mean in the internet era?


More than two-thirds of the world is using the internet, a lot. Image:  REUTERS/Fred Prouser

John Letzing


  • The US Supreme Court is weighing in on whether social media sites can be compelled to include all viewpoints no matter how objectionable.
  • The high court’s decision could have a broad impact on how the internet is experienced.
  • Its deliberations raise questions about regulation, freedom of speech and what makes for a healthy and equitable online existence.

In 1996, a man in South Africa locked himself into a glass cubicle and mostly limited his contact with the outside world to an internet connection for a few months. “The exciting aspect is realizing just how similar we all are in this growing global village,” he gushed to a reporter on the verge of his release.

What was an oddball stunt 28 years ago now verges on a rough description of daily existence. When Richard Weideman locked himself into that cubicle to stare at a screen all day, only about 1% of the global population was online, and social media was mostly limited to the 5,000 or so members of the WELL, an early virtual community. Now, more than two-thirds of the world is using the internet, a lot – and the global village is not in great shape.

The US Supreme Court is currently attempting to sort out exactly how internet discourse should be experienced. Are YouTube, Facebook and TikTok places where top-down decisions should continue to be made on what to publish and what to exclude? Or, are they more akin to a postal service that’s obliged to convey all views, no matter how unseemly?

By weighing in on two state laws mandating that kind of forced inclusion, the high court could end up ensuring free expression by erasing editorial guardrails – and the average scroll through social media might never be quite the same. A decision is expected by June.

This potential inflection point comes as just about everyone and their grandparents are now very online. Not participating doesn’t seem like an option anymore. “The modern public square” is one way to describe it. Oral arguments before the Supreme Court have surfaced other analogies, like a book shop or a parade.

Excluding people from marching in your parade might seem unfair. But it's your parade.

As more people get online, the desire to govern discourse has increased.

More than a century ago, a Supreme Court justice made his own analogy: speech that doesn’t merit protection is the type that creates a clear and present danger, like falsely shouting “fire” in a crowded theater.

“Shouting 'fire' in a crowded theater” has since become a shopworn way to describe anything deemed to cross the free-speech line.

As it turns out, falsely shouting “fire” in crowded places had actually been a real thing that people did in the years before it showed up in a Supreme Court opinion. In 1911, at an opera house in the state of Pennsylvania, it resulted in dozens of people being fatally crushed; another incident two years later in Michigan killed far more.

More recently, social media services banned certain political messaging because they believed it had also fatally incited people under false pretenses.

Those bans prompted reactive laws in Florida and Texas, triggering the current Supreme Court proceeding. The Texas legislation prohibits social media companies with big audiences from barring users over their viewpoints. The broader Florida law also forbids such deplatforming, and zeroes in on the practice of shadow banning.

That particular form of surreptitious censoring isn’t confined to the US, even if many popular social media services are headquartered there. The EU’s Digital Services Act, approved in 2022, is meant to prohibit shadow banning. In India, users trying to broach touchy subjects online have alleged that it’s happened. And in Mexico, some critics have actually advocated for more shadow banning of criminal cartels.

‘A euphemism for censorship’

In 1969, computer scientists in California established the first network connection via the precursor to the modern internet. They managed to send the initial two letters of a five-letter message from a refrigerator-sized machine at the University of California, Los Angeles, before the system crashed. Things progressed quickly from there.

In 2006, Google blew a lot of minds by paying nearly $1.7 billion for YouTube – an astounding price for something widely considered a repository for pirated content and cat videos.

By 2019, YouTube was earning a bit more than $15 billion in ad revenue annually, and had a monthly global audience of 2 billion users. It’s now at the crux of a debate with far-reaching implications; if the Florida and Texas laws are upheld by the Supreme Court, the site would likely have a much harder time barring hateful content, if it could at all.

That might be just fine for some people. One Supreme Court justice wondered during oral arguments whether the content moderation currently employed by YouTube and others is just “a euphemism for censorship.”

In some ways, we’ve already had at least a partial test run of unleashing a broader range of views on a social media channel. When it was still called Twitter, the site banned political ads due to concerns about spreading misinformation, and even banned a former US president. Now, as “X,” it’s reinstated both.

According to one recent analysis, X’s political center of gravity shifted notably after coming under new ownership in late 2022, mostly by design. The response has been mixed; sharp declines in downloads of the app and usage have been reported.

Richard Weideman, self-made captive of the internet circa 1996.

Government intervention to force that kind of recalibration, or to mandate any kind of content moderation decisions, would likely be unpopular. X, for example, has challenged a law passed in California in 2022 requiring social media companies to self-report the moderation decisions they’re making. The Electronic Frontier Foundation has called that law an informal censorship scheme.

Pundits seem skeptical that the Supreme Court will let the state laws requiring blanket viewpoint inclusion stand. During oral arguments, the court’s chief justice asked whether the government should really be forcing a "modern public square" run by private companies to publish anything. An attorney suggested the result might be so disruptive that, at least until they can figure out how to best proceed, some sites might consider narrowing their focus to “nothing but content about puppies.”

Workarounds are already available to some people who feel overlooked online. Starting an entirely new social media site of their own, for example. Or, if they happen to be among the richest people in the world, maybe buying one that’s already gained a huge audience.

Neither option is very realistic for most of us. And there may be a legitimate case to be made that ubiquitous platforms do sometimes unfairly marginalize certain voices.

(It's also possible that “content about puppies” would be preferable to what’s often available now).

Ultimately, no sweeping legal remedy may be at hand. Instead, we'll likely remain in an uneasy middle ground that only becomes more bewildering as artificial intelligence spreads – mostly relying on algorithm-induced familiarity, maybe wondering if it’s social-media ineptitude or shadow banning that’s keeping us from getting the attention we deserve, and not infrequently stepping out of our online comfort zone to steal a glimpse of something jarring.

More reading on freedom of expression online

For more context, here are links to further reading from the World Economic Forum's Strategic Intelligence platform:

  • “The US Supreme Court Holds the Future of the Internet in its Hands.” The headline says it all. (Wired)
  • This pole dancer won an apology from Instagram for blocking hashtags she and her peers had been using “in error,” then proceeded to publish an academic study on shadow banning. (The Conversation)
  • “Social media paints an alarmingly detailed picture.” Sometimes people don’t want to be seen and heard online, particularly if asked for their social media identifiers when applying for a visa, according to this piece. (EFF)
  • “From Hashtags to Hush-Tags.” The removal of victims’ online content in conflict zones plays in favor of regimes committing atrocities, according to this analysis. (The Tahrir Institute for Middle East Policy)
  • Have you heard the conspiracy theory about a “deep state” plot involving a pop megastar dating a professional American football player? According to an expert cited in this piece, it’s just more evidence of our current era of “evidence maximalism.” (The Atlantic)
  • Everyone seems pretty certain that internet discourse has negatively affected behavior, politics, and society – but according to this piece, truly rigorous studies of these effects (and responsible media coverage of those studies) are rarer than you might think. (LSE)
  • One thing social media services don’t appear to have issues with publishing: recruitment campaigns for intelligence agencies, according to this piece. (RUSI)

On the Strategic Intelligence platform, you can find feeds of expert analysis related to Media, Law, Digital Communications, and hundreds of additional topics. You’ll need to register to view.




Supreme Court tackles social media and free speech

Nina Totenberg at NPR headquarters in Washington, D.C., May 21, 2019. (photo by Allison Shelley)

Nina Totenberg

In a major First Amendment case, the Supreme Court heard arguments on the federal government's ability to combat what it sees as false, misleading or dangerous information online.


The Supreme Court Will Set an Important Precedent for Free Speech Online

A finger pressing a keyboard.

Do social media sites have a First Amendment right to choose which information they publish on their websites?

That’s the question the Supreme Court will address this term when it reviews two laws from Texas and Florida that would force businesses such as Facebook and YouTube to carry certain content that they do not want to feature. Under the guise of “prohibiting censorship,” these laws seek to replace the private entities’ editorial voice with preferences dictated by the government.

The court’s decision will define the public’s experience on the internet: How much control will the government have over public debate? How vigorous will our online conversations be if platforms feel pressured to avoid controversial topics? What art, news, opinion, and communities will we discover on the platforms that are so central to how we communicate when the government mandates what content and which speakers must appear?

To enable online speech and access to information, the ACLU has long urged social media companies to exercise great caution when deciding whether and how to remove or manage lawful posts. On the very largest platforms, free expression values are best served if companies choose to preserve as much political speech as possible, including the speech of public figures. But, regardless of what platforms ought to permit as a matter of corporate policy, the government can’t constitutionally mandate what they ultimately choose.


Moreover, platforms have no choice but to prioritize some content over others — something always has to come first. They make decisions to remove, demote, or hide lawful content to minimize speech that the business does not want to be associated with, that puts off their consumers or advertisers, and that is of little interest or value to their users. And they don’t all make the same decisions, reflecting their different editorial choices. Facebook, for example, prohibits nudity while Twitter, now X, allows it.

Motivated by a perception that social media platforms disproportionately silence conservative voices, some states have sought to regulate platforms’ curatorial decisions. The Florida law at issue before the Supreme Court prohibits social media companies from banning, in any way limiting the distribution of posts by, or prioritizing posts by or about political candidates; it also prohibits taking any action to limit distribution of posts by “journalistic enterprises.” The Texas law bars larger social media platforms from blocking, removing, or demonetizing content based on the users’ views.

The government’s desire to have private speakers distribute more conservative — or for that matter, progressive, liberal, or mainstream — viewpoints is not a permissible basis for regulating the editorial decisions of private platforms. Choosing what not to publish and how to prioritize what is published is protected expression. In deciding what books to release or sell, publishers and booksellers are unquestionably exercising their free speech rights, as are curators of an art exhibit, and editors deciding what op-eds to publish in a newspaper. The government can’t make the decision for them.

This is why in the lower courts’ review of these laws, the ACLU submitted two friend-of-the-court briefs arguing that it is unconstitutional to force social media and other communications platforms to publish unwanted content.

This has long been settled law. For example, in a case called Miami Herald v. Tornillo, the Supreme Court held that a law requiring newspapers that published criticisms of political candidates to also publish any reply by those candidates was unconstitutional. The law had forced private publishers to carry the speech of political candidates, whether they liked it (or agreed with it) or not. As the Supreme Court explained in striking down the law, a government-mandated “right of access inescapably dampens the vigor and limits the variety of public debate.”

The Supreme Court’s established precedent for protecting editorial discretion applies to online platforms as well. Private speech on the internet should receive at least as much First Amendment protection as print newspapers and magazines do. And social media platforms, in combining multifarious voices, exercise their First Amendment rights while also creating the space for the free expression of their users.


These entities shouldn’t be required to publish, and their users shouldn’t be forced to contend with, speech that doesn’t fit the expressive goals of the platform or of the community of users. Nor should platforms be required to avoid certain topics entirely because they don’t want to publish or distribute all viewpoints on those topics. Under the guise of “neutrality,” if these laws go into effect, we will be confronted by a lot more distracting, unwanted, and problematic content when using the internet.

For example, a platform should be able to publish posts about vaccination without having to present the views of a political candidate recommending that people drink bleach to combat COVID-19. Similarly, a platform should be able to welcome posts about anti-racism without having to host speech by neo-Nazis. And a social media site should be able to host speakers questioning the scientific basis for climate change or affirming the existence of God without having to publish contrary viewpoints. If people want any of this material, they can seek it out. But the government cannot force it upon either the platforms or the public that relies on them.

Social media and other online platforms are vital to online speech, enabling us to discuss ideas and share perspectives. Given their significant role, the major platforms should facilitate robust debate by erring on the side of preserving the public’s speech. And if they remove protected content, they should offer clarity upfront as to why and, at a minimum, stick to their own rules. Platforms should also offer opportunities for appeals when they inevitably get things wrong. But the government can’t force platforms to carry the speech or promote the viewpoints that it prefers, any more than it could require a bookstore to stock books it did not want to sell.

Ultimately, users should have as much control as possible over what expression they can access. Even if we think the major platforms could be doing a better job, a government-mandated point of view would be a cure worse than the disease.



What you need to know about Section 230, the ‘most important law protecting internet speech’

Section 230 grants broad legal protections to websites that host user-generated content, like Facebook and Google.


A law credited with birthing the internet — and with spurring misinformation — has drawn bipartisan ire from lawmakers who are vowing to change it.

Section 230 of the Communications Decency Act shields internet platforms from liability for much of what their users post.

Both Democrats and Republicans point to Section 230 as a law that gives too much protection to companies like Facebook, YouTube, Twitter, Amazon and Google — albeit for different reasons.

Former President Donald Trump wanted changes to Section 230 and  vetoed  a military spending bill in December because it didn’t include them. President Joe Biden has  said  that he’d be in favor of revoking the provision altogether. Biden’s pick for commerce secretary  said  she will pursue changes to Section 230 if confirmed.

There are  several bills  in Congress that would repeal Section 230 or amend its scope in order to limit the power of the platforms. In response,  even tech companies  have called for revising a law they say is outdated.

“In the offline world, it’s not just the person who pulls the trigger, or makes the threat or causes the damage — we hold a lot of people accountable,” said Mary Anne Franks, a law professor at the University of Miami. “Section 230 and the way it’s been interpreted essentially says none of those rules apply here.”

How did Section 230 come to be, and how could potential reforms affect the internet? We consulted the law and its experts to find out. (Have a question we didn’t answer here? Send it to  [email protected] .)

What is Section 230?


Donna Rice Hughes, of the anti-pornography organization Enough is Enough, meets reporters outside the Supreme Court in Washington Wednesday, March 19, 1997, after the court heard arguments challenging the 1996 Communications Decency Act. The court, in its first look at free speech on the Internet, was asked to uphold a law that made it a crime to put indecent words or pictures online where children can find them. They struck it down. (AP Photo/Susan Walsh)

Congress passed the Communications Decency Act as Title V of the Telecommunications Act of 1996, when an increasing number of Americans started to use the internet. Its original purpose was to prohibit making “indecent” or “patently offensive” material available to children.

In 1997, the Supreme Court  struck down  the Communications Decency Act as an unconstitutional violation of free speech. But one of its provisions survived and, ironically, laid the groundwork for protecting online speech.

Section 230  says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That provision, grounded in the language of First Amendment law,  grants broad legal protections  to websites that host user-generated content. It essentially means they can’t be sued for libel or defamation for user posts. Section 230 is especially important to social media platforms, but it also protects news sites that allow reader comments or auction sites that let users sell products or services.


“Section 230 is understood primarily as a reaction to state court cases threatening to hold online service providers liable for (possible) libels committed by their users,” said Tejas Narechania, an assistant law professor at the University of California-Berkeley.

Section 230 changed that. For example, if a Facebook user publishes something defamatory, Facebook itself can’t be sued for defamation, but the post’s original author can be. That’s different from publishers like the New York Times, which can be held liable for content they publish — even if they didn’t originate the offending claim.

There are some exceptions in Section 230, including for copyright infringement and violations of federal and state law. But in general, the provision grants social media platforms  far more leeway  than other industries in the U.S.

Why does it matter?


Sen. Ron Wyden (D-Ore.), one of the authors of Section 230, in 2021. (Demetrius Freeman/The Washington Post via AP, Pool)

Section 230 is the reason that you can post photos on Instagram, find search results on Google and list items on eBay. The Electronic Frontier Foundation, a nonprofit digital rights group,  calls it  “the most important law protecting internet speech.”

Section 230  is generally considered  to be speech-protective, meaning that it allows for more content rather than less on internet platforms. That objective was baked into the law.

In crafting Section 230, Sen. Ron Wyden, D-Ore., and Rep. Chris Cox, R-Calif., “both recognized that the internet had the potential to create a new industry,” wrote Jeff Kosseff in “The Twenty-Six Words That Created the Internet.”

“Section 230, they hoped, would allow technology companies to freely innovate and create open platforms for user content,” Kosseff wrote. “Shielding internet companies from regulation and lawsuits would encourage investment and growth, they thought.”

Wyden and Cox were right — today, American tech platforms like Facebook and Google  have billions of users  and are among the wealthiest companies in the world. But they’ve also become vehicles for  disinformation  and  hate speech , in part because Section 230 left it up to the platforms themselves to decide how to moderate content.

Until relatively recently, most companies took a light touch to moderating content that’s not illegal, but still problematic. (PolitiFact, for example, participates in programs run by Facebook and TikTok to fight misinformation.)

“You don’t have to devote any resources to make your products and services safe or less harmful — you can solely go towards profit-making,” said Franks, the law professor. “Section 230 has gone way past the idea of gentle nudges toward moderation, towards essentially it doesn’t matter if you moderate or not.”

Without Section 230, tech companies would be forced to think about their legal liability in an entirely different way.

“Without Section 230, companies could be sued for their users’ blog posts, social media ramblings or homemade online videos,” Kosseff wrote. “The mere prospect of such lawsuits would force websites and online service providers to reduce or entirely prohibit user-generated content.”

Has the law changed?

The law has changed a little bit since 1996.

Section 230’s first major challenge came in 1997, when America Online was sued for failing to remove libelous ads that erroneously connected a man’s phone number to the Oklahoma City bombing. The U.S. Court of Appeals for the Fourth Circuit  ruled  in favor of AOL, citing Section 230.

“That’s the case that basically set out very expansive protection,” said Olivier Sylvain, a law professor at Fordham University. “It held that even when an intermediary, AOL in this case, knows about unlawful content … it still is not obliged under law to take that stuff down.”

That’s different from how the First Amendment treats other distributors, such as booksellers. But the legal protections aren’t limitless.

In 2008, the Ninth Circuit appeals court  ruled  that Roommates.com could not claim immunity from anti-discrimination laws for requiring users to choose the preferred traits of potential roommates. Section 230 was  further weakened  in 2018 when Trump  signed  a package of bills aimed at limiting online human trafficking.

The package created an exception that held websites liable for ads for prostitution. As a result, Craigslist  shut down  its section for personal ads and certain Reddit groups  were banned .

What reforms are being considered?


Sen. Joshua Hawley (R-Mo.) is one of several senators who has introduced a bill to modify or repeal Section 230. (Graeme Jennings/Pool via AP)

In 2020, following  a Trump executive order  on “preventing online censorship,” the Justice Department  published a review  of Section 230. In it, the department recommended that Congress revise the law to include carve-outs for “egregious content” related to child abuse, terrorism and cyber-stalking. The review also proposed revoking Section 230 immunity in cases where a platform had “actual knowledge or notice” that a piece of content was unlawful.

The Justice Department review came out the same day that Sen. Josh Hawley, R-Mo.,  introduced a bill  that  would require companies  to revise their terms of service to include a “duty of good faith” and more transparency about their moderation policies. A flurry of other Republican-led efforts came in January after  Twitter banned Trump  from its platform. Some proposals  would make  Section 230 protections conditional, while others  would repeal  the provision altogether.

Democrats have instead focused on reforming Section 230 to hold platforms accountable for harmful content like hate speech, targeted harassment and drug dealing. One proposal would require platforms to explain their moderation practices and to produce quarterly reports on content takedowns. The Senate Democrats’ SAFE Tech Act would revoke legal protections for platforms where payments are involved.

That last proposal is aimed at reining in online advertising abuses, but critics say even small changes to Section 230 could have unintended consequences for free speech on the internet. Still, experts say it’s time for change.

“Section 230 is a statute — it is not a constitutional norm, it’s not free speech — and it was written at a time when people were worried about electronic bulletin boards and newsgroups. They were not thinking about amplification, recommendations and targeted advertising,” Sylvain said. “Most people agree that the world in 1996 is not the world in 2021.”

This article was originally published by PolitiFact, which is part of the Poynter Institute. It is republished here with permission. See the sources for these fact-checks here and more of their fact-checks here.

More about Section 230

  • What journalists should know about Communications Decency Act Section 230
  • Opinion: It’s time to repeal the law that gives social media sites immunity for anything their users post
  • Americans want some online misinformation removed, but aren’t sure who should do it



The Oxford Handbook of Freedom of Speech

19 The Internet and Social Media

Thomas and Karole Green Professor of Law, Washington University School of Law, St. Louis, United States

Published: 10 February 2021

Abstract

This chapter surveys the distinctive free speech problems raised by the Internet and social media, discussing the most pressing, prominent issues around Internet speech regulation, with attention to variations across legal systems. It begins by briefly describing the Internet’s communicative architecture. The chapter then looks at structural concerns that have limited online free speech or prompted regulatory attention in the Internet Age. These include inequalities of access; power relationships among governments, private speech intermediaries, and Internet users; and the ways the Internet’s architecture complicates effective regulation. Finally, the chapter considers key substantive issues for online communication, including hate speech, privacy, intellectual property, and the credibility and influence of online news sources.

Theorists and law-makers often posit essential characteristics of human communication to ground free speech principles and protections. New technologies, however, can change communication paradigms in ways that destabilize those principles and protections. This chapter surveys the distinctive free speech problems raised by the Internet and social media. It discusses the most pressing, prominent issues around Internet speech regulation, with attention to variations across legal systems. It assesses the present state of a constantly changing media sector, emphasizing the issues that appear likeliest to matter for the future.

The chapter first briefly describes the Internet’s communicative architecture. It then discusses structural concerns that have limited online free speech or prompted regulatory attention in the Internet Age. These include inequalities of access; power relationships among governments, private speech intermediaries, and Internet users; and the ways the Internet’s architecture complicates effective regulation. Finally, the chapter discusses key substantive issues for online communication, including hate speech, privacy, intellectual property, and the credibility and influence of online news sources.

19.1 The Internet as a Free Speech Medium

The Internet’s technological architecture frames any discussion of Internet free speech problems. 1 In the simplest terms, the Internet is a worldwide network of computer networks, voluntarily interconnected and bound by shared technical standards. Any user of the Internet can make content available to any other user, either selectively (one-to-one communication) or more broadly (one-to-many communication). 2 The Internet enables one-to-one communication through messaging platforms such as email, chat, and text services. The dominant vehicle for one-to-many Internet communication is the World Wide Web, which uses hyperlinks to cross-reference content across networks. The Internet can deliver all manner of media content, including text, sound, still and moving pictures, and executable programmes. No central authority controls the Internet’s infrastructure or the flow of information online. 3 As of 2018, more than four billion people worldwide communicated on the Internet. 4
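
To make the two patterns concrete, the following is a minimal Python sketch, offered purely as illustration: the URL, addresses, and message text are hypothetical, and the many real protocol layers are collapsed into in-memory dictionary lookups.

    # Hypothetical content published to the open Web and hypothetical private mailboxes.
    published = {"https://blog.example/post-1": "a post anyone on the network can retrieve"}
    mailboxes = {"alice@example.org": [], "bob@example.org": []}

    def browse(url):
        # One-to-many: any user may retrieve content that has been made generally available.
        return published[url]

    def send_message(sender, recipient, text):
        # One-to-one: content is delivered selectively to a single chosen recipient.
        mailboxes[recipient].append((sender, text))

    print(browse("https://blog.example/post-1"))
    send_message("alice@example.org", "bob@example.org", "a message only bob receives")
    print(mailboxes["bob@example.org"])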

Social media platforms such as Facebook, Twitter, and Instagram add to the Internet’s utility by letting users form online communities for sharing text, pictures, video, and links to Web content. 5 Different social media platforms enable different models of community formation. A speaker on an asymmetric social network, such as Twitter, can make messages generally available to whoever chooses to access them. Speakers on a symmetric social network, such as Facebook, choose to associate as ‘friends’, and a speaker can choose to send messages either to all of their friends or to a subset of them. More than three billion people worldwide used social media as of 2018, a 13 per cent increase over the prior year. 6 Social media platforms’ increasing prominence has made their functions and characteristics crucial in determining the scope and forms of Internet speech regulation.
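
A similarly minimal sketch of the two community models just described, using plain Python sets; the user names are hypothetical, and actual platforms layer far richer data and ranking systems on top of these basic relations.

    # Asymmetric model (Twitter-style): following is a one-way, directed relation.
    follows = {
        "alice": {"bob", "carol"},   # alice follows bob and carol
        "bob": set(),                # bob follows no one, yet anyone may follow bob
    }

    # A post on an asymmetric network reaches whoever has chosen to follow the speaker.
    def asymmetric_audience(speaker, follows):
        return {user for user, followees in follows.items() if speaker in followees}

    # Symmetric model (Facebook-style): friendship is mutual, so store unordered pairs.
    friendships = {frozenset({"alice", "bob"}), frozenset({"bob", "carol"})}

    # A post on a symmetric network goes to all friends, or to a chosen subset of them.
    def symmetric_audience(speaker, friendships, subset=None):
        friends = {other for pair in friendships if speaker in pair for other in pair - {speaker}}
        return friends if subset is None else friends & subset

    print(sorted(asymmetric_audience("bob", follows)))              # ['alice']
    print(sorted(symmetric_audience("bob", friendships)))           # ['alice', 'carol']
    print(sorted(symmetric_audience("bob", friendships, {"carol"})))  # ['carol']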

The advent of smartphones, wireless hand-held devices that allow portable Internet connectivity, has substantially augmented Internet communication. The ability to carry the Internet’s expressive capacities wherever the user goes has made smartphones an increasingly important vehicle for online communication. 7 The practical utility of smartphones and the communal function of social media complement and enhance one another.

Given unrestricted access to the Internet, anyone can view news and information from the other side of the world as easily as local content. This transnational scope has powerful implications for the utility and freedom of Internet speech. Online communications of all sorts can reach across national and cultural divides. Content providers can evade direct regulations by hosting information in low-regulation jurisdictions. The Internet’s transnational reach makes the problems of Internet free speech transnational as well.

The freedom of Internet speech matters deeply, at a normative level, because the Internet offers unprecedented opportunities for realizing the social benefits of free speech. 8 The United Nations has passed a resolution that calls on member states to protect the right to access and disseminate information on the Internet. 9 The US Supreme Court, in its first encounter with an effort to regulate the Internet, extolled the Internet’s ‘vast democratic forums’, where anyone formerly limited to passive receipt of information can now speak to the world. 10 Canada’s telecommunications regulator has similarly stated that ‘fixed and mobile wireless broadband Internet access services are catalysts for innovation and underpin a vibrant, creative, interactive world’. 11 Online communication has made powerful contributions to political dynamism and dissent around the world, especially in countries with authoritarian governments. 12 To take one vivid example, political dissidents in Arab nations centrally used social media in early 2011 to organize and spread the wave of mass political protests known as the Arab Spring. 13 Beyond politics, the Internet promotes and stimulates artistic creativity, scientific inquiry, and commercial exchange.

Just as the Internet enhances the benefits of speech, it enhances the harms that speech can cause and that governments may seek to curb. The Internet creates new ways for both governments and wealthy private institutions to aggrandize power over people and communities. Social media occupy a massive cultural space while arguably promoting limited modes of social interaction that crowd out other forms of human connection. Liberal protections for free speech online must take account of the Internet’s hazards as well as its promise.

19.2 Structural Free Speech Problems

How well any mass medium facilitates free speech depends on the medium’s political, economic, and technical structures. Who owns, operates, and benefits from the medium’s infrastructure? Who can use the medium, and what conditions affect access to it? What methods can governments use to restrict speech on the medium? The Internet’s distinctive architecture presents a set of interconnected structural free speech problems.

19.2.1 Inequalities of Access to the Internet

The freedom of speech includes an important dimension of distributive justice. 14 The most basic structural determinant of a communication medium’s effectiveness is the medium’s accessibility to users. In pre-Internet mass media, scarcity of resources, from printing paper to the broadcast spectrum, created a sharp divide between providers and consumers of information. The Internet’s decentralized, many-to-many communication architecture largely obviates that sort of resource concern as to online speech. The Internet has drastically reduced the cost of making information available to a mass audience. 15 Two important resource problems, however, still restrict opportunities to communicate online.

First, many people around the world cannot access the Internet at all. Poorer nations, and poorer people within affluent nations, often cannot afford the keys to the Internet’s kingdom. A 2016 report by the International Telecommunications Union found a stark divide in ‘Internet penetration’: the percentages of different nations’ residents who have Internet access. Developed countries, on average, have about 70 per cent penetration rates, while developing countries have only about 15 per cent penetration. Europe leads the world, with Internet access for more than 80 per cent of its residents. Several European nations (Iceland, Luxembourg, Norway) have penetration rates over 95 per cent. In contrast, Asian nations and the Arab states have penetration rates of only about 40 per cent. Africa trails the world with a penetration rate of only 10 per cent. At least ten African nations have penetration rates below 7 per cent. 16 People without Internet access are disproportionately less educated, rural, elderly, and female. The cost of Internet access is a serious factor in the penetration disparity: in the developing world, a fixed broadband connection can cost a large percentage of a family’s household income. 17

Second, even as the Internet has lowered the cost of entry into mass communication, the cost for speakers to actually reach a mass audience remains high. The Internet’s flood of information has exposed the importance of a communication resource long taken for granted: human attention. 18 No one can process all the information the Internet makes available on any given subject. People must choose from among the vast array of information sources online, a reality that places a great premium on content providers’ capacity to influence and steer audience choices. The pivotal value of audience attention creates advantages for larger, better-financed content providers, which can devote more resources to capturing audience attention. Thus, even though audiences have myriad options for finding information online, limited numbers of content providers dominate important online information environments, much like large media companies were able to do in the pre-Internet era. 19

19.2.2 Concentrations of Private Power

Freedom from government regulation does not guarantee a lived experience of expressive freedom. Private speech intermediaries—most notably Internet Service Providers (ISPs), search engines, and social media platforms—largely dictate how information flows among Internet users. All of these entities exist not to promote free speech values but rather to make money. Private intermediaries, while generally lacking governments’ political motives and accountability, strongly influence the social and political valences of online speech. 20 Authoritarian regimes and even some democratic governments enlist intermediaries to restrict the public’s access to disfavoured speech. 21 More commonly, the nature and power of Internet speech intermediaries exacerbate a difficult, long-standing problem for free speech theory: whether and to what extent governments may and should regulate intermediaries to promote free speech.

Governments have often sought to avoid this problem through commercial regulations that limit intermediaries’ ability to control communication. Such structural regulation constrains not the content of speech but the commercial mechanisms by which private intermediaries control and channel information. Constitutional speech protections generally do not constrain governments from regulating these structural features of communications media. 22 As long as governments do not regulate with the purpose or effect of stifling particular ideas, they generally do not breach free speech barriers. Thus, in the pre-Internet era, governments often restricted, for example, the number of newspapers or broadcast stations that any person or company could own. 23

The most prominent issue for structural regulation of Internet speech has been net neutrality. ISPs want the latitude to deliver different online content for different prices and at different speeds. ‘Net neutrality’ means a legal mandate that ISPs deliver service without discriminating, in pricing or terms, among sources or contents of speech. Advocates see net neutrality as essential for realizing the democratic promise of the Internet. They point out that, without net neutrality, ISPs can discriminate against content providers whose messages the ISPs disfavour and can marginalize content providers unable to pay premium prices for higher speed or higher volume services. 24 Opponents of net neutrality most commonly argue that letting ISPs set prices and terms of service will create the most efficient conditions for Internet communication. 25
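
As a rough illustration of the underlying mechanics, the following Python sketch contrasts a hypothetical non-neutral delivery policy with a neutral one; the provider names and speeds are invented for illustration and do not describe any actual ISP.

    # Hypothetical per-source delivery policy a non-neutral ISP might apply:
    # paying or affiliated sources get a fast lane, everyone else is throttled.
    NON_NEUTRAL_POLICY = {
        "partner-video.example": 100,   # Mbps: pays the ISP for priority carriage
        "default": 5,                   # Mbps: all other content providers
    }

    def delivery_speed(source, policy, neutral=False):
        # Under a net neutrality mandate, every source gets the same terms.
        if neutral:
            return 25
        return policy.get(source, policy["default"])

    for source in ("partner-video.example", "independent-news.example"):
        print(source,
              "non-neutral:", delivery_speed(source, NON_NEUTRAL_POLICY),
              "neutral:", delivery_speed(source, NON_NEUTRAL_POLICY, neutral=True))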

Governments around the world have enacted net neutrality mandates. Chile in 2010 became the first nation to adopt net neutrality. 26 The European Union maintains net neutrality regulations, enforced by a transnational regulator. 27 Canada protects net neutrality under its Telecommunications Act, which treats ISPs as utilities. 28 Critics, however, have complained that loopholes and exceptions frequently undermine net neutrality regimes. For example, Canadian regulators have allowed ISPs to impose differential charges for residential Internet service based on usage rates. 29 Such controversies underscore the difficulty of setting and enforcing a baseline of ‘neutrality’ in any legal setting, let alone a setting as complex and fluid as Internet speech.

Perhaps the most contentious battle over the legality and wisdom of net neutrality has played out in the United States. The Obama administration in 2015 imposed a net neutrality mandate through federal regulations, but the Trump administration in 2017 rescinded those regulations. 30 Legal arguments about net neutrality in the United States are taking on an increasingly constitutional cast. Some advocates for net neutrality have argued that the First Amendment’s free speech protections require non-discriminatory access to information and audiences and thus compel net neutrality. 31 Conversely, net neutrality opponents, including the recently appointed US Supreme Court Justice Brett Kavanaugh, argue that net neutrality violates the First Amendment by restricting ISPs’ editorial autonomy to channel content as they see fit. 32 The success of First Amendment arguments either for requiring or for barring net neutrality would represent a major paradigm shift in what US courts so far have treated as a non-constitutional question of structural regulation.

19.2.3 Practical Problems of Content Regulation

A final set of structural issues for Internet free speech, which interact with some of the substantive issues discussed below, concerns the conceptual and logistical challenges that Internet technology presents when governments seek to regulate the content of speech.

A difficult baseline question about substantive regulation is whether, and to what extent, data on the Internet counts as speech that deserves constitutional protection. The Internet can produce and transmit speech in ways that differ from prior media. Substantial aspects of the Internet’s processes for generating data, notably autonomous and semi-autonomous algorithms, do not depend on the direct control by human beings that ordinarily characterizes constitutionally protected speech, and much raw online data does not convey ideas in any conventional sense. 33 In the United States, First Amendment law provides very strong protections for speech but has not fully theorized what counts as speech. 34 Scholars disagree about the extent to which information transmitted online deserves First Amendment protection. 35 The US Supreme Court has suggested but not decided that the First Amendment protects online data flows. 36 Given the importance of US-based intermediaries for Internet communication, US law’s resolution of the data-as-speech question may carry great significance for the nature and extent of Internet speech regulations.

Where a government can justify a given regulation of Internet content, what sort of regulation is logistically possible? Direct regulation of online speech is often difficult. The transnational character of online communication means that online speakers can frequently elude national regulatory regimes. The enormous volume of online speech can make content regulation very expensive. To take one example, an attempt in the United States to directly restrict ‘indecent’ online content failed constitutional review in part because the Internet’s architecture compelled regulators to cast a very wide net and still created major impediments to the regulations’ effectiveness. 37 Totalitarian and authoritarian regimes can, and often do, stifle disfavoured content by imposing blanket controls on Internet access. 38 Democratic governments have sought to constrain online speech through indirect regulation, using incentives and penalties to make private speech intermediaries regulate content. Critics have called this approach ‘soft censorship’. 39 The discussion of substantive Internet speech problems below shows how democratic governments have used indirect regulation to curb hate speech, protect privacy, and secure intellectual property.

The difficulty of directly regulating online content and the greater ease of indirectly regulating speech intermediaries strongly influence how law and public policy interact to promote and restrain Internet speech. Online intermediaries have some characteristics of content providers or editors and other characteristics of mere speech conduits. Search engines use proprietary algorithms, as distinct from human editors, to organize and present information about content that third parties have created. 40 Social media platforms likewise use proprietary algorithms to direct users towards particular content created by other users. In general, a claim for free speech protection from government regulation depends on the premise that the claimant is creating content or at least exercising substantial editorial discretion. Conversely, a party’s control over the content of speech can justify making the party liable for harms the speech causes. Thus, the stronger an intermediary’s case for free speech protection, the stronger may be the government’s motivation to impose speech restrictions. 41 This paradox gets more complicated for online intermediaries whose functions and methods muddy the distinction between speakers/editors and conduits. Sorting out intermediaries’ amenability to regulation becomes especially important when the intermediaries’ pursuit of their self-interests arguably compromises the free speech interests of Internet users.
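
A minimal sketch of the kind of automated ordering at issue: the intermediary scores and ranks content created by others, with no human editor in the loop. The fields and weights below are hypothetical and are not drawn from any actual platform’s algorithm.

    # Each item is third-party content; the intermediary only scores and orders it.
    posts = [
        {"id": 1, "likes": 120, "age_hours": 2.0, "from_followed_account": True},
        {"id": 2, "likes": 40,  "age_hours": 0.5, "from_followed_account": False},
        {"id": 3, "likes": 900, "age_hours": 30.0, "from_followed_account": False},
    ]

    def score(post):
        # Hypothetical weighting: engagement, recency, and relationship to the user.
        engagement = post["likes"]
        recency = 1.0 / (1.0 + post["age_hours"])
        relationship = 2.0 if post["from_followed_account"] else 1.0
        return engagement * recency * relationship

    feed = sorted(posts, key=score, reverse=True)
    print([post["id"] for post in feed])   # [1, 3, 2]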

19.3 Substantive Free Speech Problems

In contrast to the structural regulations discussed above, substantive regulations deliberately target the content of speech in order to achieve some regulatory goal. Authoritarian regimes regularly impose sweeping substantive restrictions on Internet speech. 42 Democratic political systems’ constitutional speech protections generally bar substantive speech regulations, subject to varying rules for balancing speech interests against regulatory priorities. Indirect regulatory strategies, under which governments induce private speech intermediaries to restrict online speech, may avoid constitutional bars on substantive regulation while still presenting serious threats to free speech norms and to expressive freedom in practice.

This section discusses several substantive free speech problems with high salience for Internet speech. The problems this section will discuss—hateful and defamatory speech, the interaction of speech and privacy interests, intellectual property, and concerns about the democratic effectiveness of the news media—have long figured heavily in free speech law and theory. The increased speed and scope of Internet communications exacerbate the harms that speech can cause and that governments therefore may seek to ameliorate. The Internet also increases the potential benefits of speech, increasing the urgency of robust constitutional speech protections.

19.3.1 Hateful and Defamatory Speech

The Internet’s capacity to give any speaker access to a large audience has had the damaging consequence of propagating harmful and socially corrosive speech. Prominent among the harmful sorts of speech the Internet has amplified are both defamation of individuals’ reputations and ‘hate speech’: denigration of groups’ identities. Attacks on individual reputations and group identities can do much greater damage online, because of the speed at which the attacks can take hold with large audiences. The Internet has become a tool for all manner of hate groups, from white supremacists and neo-Nazis to jihadists, to spread their venom and recruit members. 43 Democratic societies have dealt with these problems in divergent ways. The United States, while allowing substantial liability for speech that defames individuals, has a strong First Amendment bar against restricting hateful or derogatory speech. 44 Other democratic governments balance free speech rights against contrary societal and dignitary interests, creating space to regulate Internet hate speech.

The European Union Charter of Fundamental Rights, for example, sets rights of expression and information, 45 as well as assembly and association, 46 alongside, and not above, rights of individual dignity, 47 personal and data privacy, 48 and equality. 49 That juxtaposition of rights creates space for restricting hate speech in order to protect dignitary and equality interests. The EU, seeking to address Internet hate speech through indirect regulation, has persuaded ISPs and other providers of access for online speech to enter into an agreement that requires the companies to remove hateful speech, defined by reference to the laws of member states, posted on the companies’ platforms. 50 Among EU member states, Germany has taken the most aggressive legal approach to hate speech, specifically targeting social media. A German statute requires social media companies to police their platforms and remove hate speech, in some cases within twenty-four hours. Non-compliance can trigger fines of up to €50 million. 51 European states have also used intermediary regulation to remedy speech that defames individuals. For example, the Italian Supreme Court held the director of a website hosting company criminally liable for defamatory statements posted on a site the company hosted. 52
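
To illustrate the compliance logic such a statute induces, here is a minimal Python sketch of a moderation queue checked against a twenty-four-hour deadline; the report data are hypothetical, and the actual statute distinguishes categories of content and deadlines that are omitted here.

    from datetime import datetime, timedelta

    # The 24-hour deadline the statute applies in some cases, per the text above.
    STATUTORY_DEADLINE = timedelta(hours=24)

    # Hypothetical moderation queue: when each item was reported and whether it was removed.
    reports = [
        {"id": "post-17", "reported": datetime(2021, 3, 1, 9, 0), "removed": datetime(2021, 3, 1, 20, 0)},
        {"id": "post-18", "reported": datetime(2021, 3, 1, 9, 0), "removed": None},
    ]

    def overdue(report, now):
        # An item still online past the deadline exposes the platform to fines.
        if report["removed"] is not None:
            return report["removed"] - report["reported"] > STATUTORY_DEADLINE
        return now - report["reported"] > STATUTORY_DEADLINE

    now = datetime(2021, 3, 2, 12, 0)
    print([r["id"] for r in reports if overdue(r, now)])   # ['post-18']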

Other democratic systems impose direct legal liability on Internet users who post hate speech. In Australia, criminal law prohibits the use of a telecommunications carriage service to deliver menacing, harassing, or offensive messages. 53 Citizens may also bring civil actions to remedy hate speech. 54 Based on such civil suits, the Federal Court of Australia has mandated the removal from the Internet of posts that denied the Holocaust 55 and has ordered a newspaper to publish corrective notices for posts that maligned the ethnic identities of fairer-skinned Aboriginal Australians. 56 In Canada, statutory prohibitions of hate propaganda have supported legal judgments against online hate speech. The Canadian Human Rights Tribunal has imposed sanctions against Internet posters for harshly attacking Jews, Afro-Canadians, and other minority groups 57 and for posting denials of the Holocaust. 58 The Canadian Supreme Court, in a case involving a provincial restriction on non-Internet speech, validated those Tribunal decisions when it substantially reaffirmed the constitutionality of the legal standard that forms the prevalent model for Canadian hate speech prohibitions. 59

Both protection and restriction of online hate speech present problems. On one hand, the United States has seen a strong increase in hate group activity, much of which reflects the effectiveness of the Internet as a propaganda medium. 60 On the other hand, restrictions on hate speech raise questions about how far the category of prohibited hate speech should extend. Some prominent topics of public discussion, notably immigration and religious extremism, inspire irrational hatred among some members of the public while generating serious public policy discussion among others. The European approach of indirectly regulating hate speech limits public denigration of groups without imposing government censorship. However, indirect regulation empowers private intermediaries, which owe no duty to the public and operate under no free speech mandate, to resolve socially fraught collisions between free speech and equality values. Intermediaries’ interest in avoiding liability gives them strong incentives to err on the side of censorship. 61 Canada’s and Australia’s direct prohibitions on speakers may encourage self-censorship, excessively limiting the scope of online discourse.

19.3.2 Speech and Privacy

Free speech rights frequently interact with privacy interests. The Internet has given rise to at least two sorts of problems that connect speech with privacy. One sort of problem arises when governments compromise privacy in ways that may degrade free speech. The other sort of problem arises when governments seek to safeguard privacy in ways that may threaten free speech.

19.3.2.1 The Internet as Government Surveillance Tool

Governments have long invoked national security concerns to justify constraints on civil and political rights, especially the right to free speech. The Internet exacerbates that threat to rights by enabling government surveillance of private communications on a scale never before possible. The Internet lets governments gather troves of data from anyone and everyone who communicates online. Mass Internet surveillance threatens free speech online in at least two ways. First, the mere fact of surveillance casts a shadow of potential government sanction over whatever people say or hear. The ability to think and communicate outside public view stimulates thought and communication. 62 Research shows that fear of government surveillance deters people from entering controversial terms in search engines 63 and from posting controversial political messages on social media. 64 Second, the grim counterpoint to social media’s positive role in the Arab Spring is governments’ use of the Internet to monitor, repress, and counteract dissent. Authoritarian governments commonly track online activity to identify and punish political dissidents. 65

Democratic governments, though more restrained, still use the Internet heavily for domestic surveillance. A British government programme called Optic Nerve, operating over at least a four-year period, indiscriminately captured millions of Web camera images from Yahoo users. The government used the images for facial recognition experiments and monitoring intelligence targets, although it suspected virtually no one whose image it captured of any wrong-doing. 66 The British also helped other European governments ramp up mass surveillance of their own populations. 67 The Canadian government used free Wi-Fi service at airports to spy on travellers. 68 The US government following the 2001 terrorist attacks vastly increased its spying on citizens. 69 In 2013, journalists revealed that a secret US government programme called Prism had for years been gathering data on Internet users directly from the servers of the world’s leading technology companies and speech intermediaries, including Apple, Google, and Facebook. 70

Threats to privacy from mass surveillance do not begin and end with government action. Private companies often use the Internet and social media platforms to gather information about users, in order to target advertising and other messages. During the 2016 US national election, for example, a right-wing data analytics firm called Cambridge Analytica gathered tens of millions of Facebook profiles in order to help target political appeals to sympathetic audiences. 71

The need to balance free speech values against legitimate security interests poses a critical challenge for ensuring open, uninhibited communication online. As with other free speech issues, the transnational character of Internet communication complicates the issue of mass surveillance, as one country’s surveillance will inevitably sweep in other countries’ citizens and may well violate other countries’ laws. 72 In addition, the complicated matter of when and how online intermediaries, predominantly based in the United States and thus subject to US law, share information with other countries’ law enforcement agencies has important implications for the freedom of online speech. 73

19.3.2.2 The Right to Be Forgotten

A different sort of conflict between Internet speech and privacy arises from the availability online of personal information. As with other Internet free speech issues, the problem has its roots in the Internet’s vast scope and wide reach. More information about individuals is available on the Internet than on any prior medium. A quick online search can collect personal data from many sources at once, and online information generally stays fresh and accessible in perpetuity. The idea of ‘the right to be forgotten’ holds that personal privacy interests should give people legal authority to have information about them expunged once the information is no longer socially useful. 74 What makes the right to be forgotten practically feasible is the central role in Internet communication of search engines, especially Google. Search engines have centralized the function of accessing information to an extent unimaginable in earlier mass media. Expunging information from search engines can make the information, as a practical matter, disappear from public view.

The EU has staked out a strong commitment to the right to be forgotten. As with hate speech, the EU approach focuses on enlisting intermediaries, in this case search engines, to remove offending information posted by others. In Google Spain SL v Agencia de Protección de Datos , 75 the European Court of Justice (ECJ) held that search engines are ‘data controllers’ under applicable EU directives. That characterization makes search engines susceptible to national authorities’ demands to remove personal information. The Court further held that national authorities could properly compel Google to remove search information despite the fact that the company processed the search data in the United States, not in Europe. Underscoring the Internet’s distinctive stakes for privacy interests, the Court rejected the privacy claimant’s parallel demand that a newspaper remove articles that referred to his past activities. Google Spain establishes two broad premises that support a robust right to be forgotten. First, the public’s interest in access to information about an individual’s past activities should yield, over time, to the individual’s interest in having the information expunged. Second, the importance of personal privacy interests justifies imposing a duty on search engines to manage data in ways that comport with privacy rights.
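
Mechanically, delisting removes a search result while leaving the underlying page online, which is why the newspaper in Google Spain could keep its articles even as the corresponding search listing disappeared. A minimal sketch, with hypothetical names and URLs:

    # A toy search index: query terms mapped to result URLs.
    index = {
        "jane doe": [
            "https://news.example/1998/debt-auction",
            "https://janedoe.example/professional-profile",
        ],
    }

    def delist(index, term, url):
        # Remove a single result for a single query; the page itself is untouched.
        index[term] = [result for result in index[term] if result != url]

    delist(index, "jane doe", "https://news.example/1998/debt-auction")
    print(index["jane doe"])   # ['https://janedoe.example/professional-profile']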

Unlike the matter of hate speech, where the United States stands largely alone among Western democracies in strongly prioritizing free speech over competing interests, an aversion to the right to be forgotten largely unites common law jurisdictions, notably including Canada and the United Kingdom as well as the United States. 76 One factor in the divergence between these states and Europe may be the differing constitutional status of privacy rights. Whereas the EU Charter of Fundamental Rights includes explicit textual protections for privacy, neither the US Constitution nor the Canadian Charter of Rights and Freedoms contains privacy language. Instead, privacy rights in the United States and Canada have developed through judicial extrapolations from other rights. The UK stands in a complicated position. Legally and culturally, the UK resembles the US and Canada in its prioritization of free speech over personal privacy interests. However, the UK as an EU member state was bound by the broad principles of Google Spain . The UK’s departure from the EU creates a possibility for the UK to disavow the right to be forgotten. However, EU trade and other regulations that mandate respect for rights recognized by the EU may sustain the UK’s commitment to the right to be forgotten. 77

The right to be forgotten, as recognized in the EU, is far from absolute. Google has rejected more requests to remove information than it has granted, reflecting an understanding that the proper balance of privacy against free speech varies with the circumstances of particular disputes. 78 Still, the right to be forgotten can undercut free speech in important ways. First, successful invocations of the right to be forgotten can have worldwide sweep. France, for instance, has sought to make Google remove offending information not just from its domestic search engine but from any search results accessible in France. 79 Second, as with the European arrangement for private speech intermediaries to police hate speech, the Google Spain framework gives private speech intermediaries power over free speech and incentives to prioritize privacy over free speech in order to avoid liability. Third, the free speech cost of the right to be forgotten—unavailability of information—is diffuse and may be invisible, while the right’s privacy benefits are concentrated and palpable. This dynamic encourages overprioritizing privacy interests.

19.3.3 Free Speech versus Intellectual Property

The speed and scope of information delivery on the Internet, along with online speakers’ capacity to recontextualize and recombine information, have made the Internet a rich engine for cultural production and creativity. 80 Those same qualities give rise to a range of intellectual property issues. 81 In particular, the ease of downloading digital files enables large-scale violations of intellectual property laws, often across national borders. For the present discussion, intellectual property rights on the Internet matter to the extent they complicate or undermine rights to free expression, particularly the right of access to information. Not all creators of online content seek strong intellectual property protections. Indeed, through Creative Commons licences and other sharing initiatives, the Internet has dramatically increased the amount and variety of information freely available to the public. 82 However, in democratic societies that generally resist government censorship of the Internet, intellectual property’s status as a private right and its grounding in liberal ideology make intellectual property rights a formidable antagonist to the freedom of online speech.

Experience in several democratic societies shows how intellectual property rights can mark an outer boundary of Internet free speech. US statutory law grants online speech intermediaries broad immunity from liability for harms caused by speech that others post on their services. 83 However, a different statute requires private Internet intermediaries to remove content from their services when copyright holders allege that the content violates their intellectual property rights. 84 This ‘notice and takedown’ system represents the apex in US law of indirect regulation as a means of imposing substantive limits on Internet speech. 85 Australia imposes a similar statutory requirement for intermediaries to remove content posted overseas with the ‘primary purpose’ of infringing intellectual property rights. 86 Elsewhere, judicial decisions have imposed obligations on Internet speech intermediaries to protect intellectual property rights. In a potentially wide-ranging case, Canada’s Supreme Court upheld an injunction against Google after the company refused to exclude from its search results the entire online domain of a company that had unlawfully copied and marketed a competitor’s products. 87 All of these restrictions on Internet speech reflect how the speed and scope of Internet communications intensify intellectual property concerns.
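
A minimal sketch of the notice-and-takedown sequence described above, with hypothetical item identifiers; the statutory procedure also provides for counter-notices, reinstatement, and penalties for misrepresentation, all omitted here.

    # Hypothetical hosted items and a takedown notice from a rights holder.
    hosted = {"video-42": "online", "video-43": "online"}
    notice = {"item": "video-42", "claimant": "ExampleRecords", "work": "Song X"}

    def process_notice(hosted, notice):
        # The intermediary removes the identified item on receipt of the allegation;
        # keeping its safe harbour depends on acting expeditiously.
        if notice["item"] in hosted:
            hosted[notice["item"]] = "removed pending counter-notice"

    process_notice(hosted, notice)
    print(hosted)   # {'video-42': 'removed pending counter-notice', 'video-43': 'online'}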

The Internet’s distinctive means of conveying and presenting information deepen the tension between free speech and intellectual property rights. When you view content on the Internet, your computer displays a visible copy of the content and often creates and stores a cached copy. Does making those copies infringe the rights of copyright holders? In the landmark 2014 decision Public Relations Consulting Ltd v Newspaper Licensing Agency , 88 the ECJ held that routine reproduction of copyrighted material in the course of Internet browsing falls outside the limits of copyright protection as recognized by EU law. A finding that ordinary Web browsing violated the law whenever a search or link led to copyrighted material would have dwarfed hate speech regulations or the right to be forgotten as a restriction on Internet speech. Although the ECJ did not rest its holding on the Charter’s protections for expressive freedom, the case shows the crucial role of legal institutions in preserving open public access to online information.
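
A minimal sketch of the incidental copying at issue in that case: an ordinary client keeps a local copy of whatever it displays. The URL and page text are hypothetical, and the ‘network’ is simulated with a dictionary so the example is self-contained.

    # Hypothetical 'remote' pages standing in for the open Web.
    remote_pages = {"https://news.example/article-123": b"<html>copyrighted article text</html>"}

    cache = {}

    def view(url):
        # Every ordinary page view creates at least one local copy of the content;
        # the ECJ held that this routine, technical reproduction does not infringe.
        if url not in cache:
            cache[url] = remote_pages[url]   # stands in for the network transfer
        return cache[url]

    view("https://news.example/article-123")   # first view stores a local copy
    view("https://news.example/article-123")   # later views are served from the cached copy
    print(list(cache))                         # ['https://news.example/article-123']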

Much of the Internet’s value for free speech lies in its openness, the ease with which it connects people to information, and its transcendence of national borders and legal systems. All of those qualities push against conventional regimes of intellectual property protection, which use state authority to restrict the reach and availability of information. The moral rights recognized in some European systems have a deeper normative grounding than economic rights and thus may create an even stronger tension with Internet norms. 89 The conceptual disconnect between Internet communication and intellectual property regimes may ultimately compel liberal societies to reconsider whether, or to what extent, a property model can effectively balance content creators’ profit incentives and moral interests against the public’s interest in access to information.

19.3.4 The Reliability and Influence of Internet News Sources

One important consequence of the Internet’s growth has been a profound change in how people get news about matters of public concern. The Internet has largely wiped out old models of news distribution, under which a relatively small number of television and radio stations, daily newspapers, and magazines dominated the news landscape. 90 Those news outlets depended for their market dominance on the high cost of reaching large audiences. The Internet, by slashing the cost of communication, has changed the production and distribution of news in ways that create both opportunities and problems for democratic societies.

On the positive side, the Internet has fostered a new generation of citizen journalists and commentators on current events. 91 Old systems of news distribution often presented only a narrow range of methodologies and viewpoints, especially in societies that valued news companies’ profits more than their social benefits. 92 The Internet opens journalism to people and groups, such as women in some religiously conservative societies, who lack opportunities to speak through established mass media. 93 Anyone can now perform the basic journalistic functions of gathering, analysing, and disseminating information of interest to the general public. Large numbers of independent sources can cover important news events, and the low cost of posting information online gives audiences access to all of those sources. In addition, citizen journalists may challenge established norms of what constitutes an important news story. 94 Professional journalists can benefit from the work of citizen journalists by using citizen journalism as raw material, yielding a greater variety of inputs and reflecting a wider range of perspectives. Citizen journalism at its best can break down rigid barriers between producers and consumers of news. 95

Two characteristics of Internet news, however, create serious hazards for journalism. The first is the loss of professional standards that has attended the decline of the old news media. Journalists in the pre-Internet era routinely had professional training, and news outlets often operated under published standards for competence and ethics. 96 These credentials and standards served to promote public trust in news outlets, at least within broad normative boundaries. Even the most conscientious citizen journalists generally lack formal training and professional structures. They may not check facts as thoroughly as professional journalists do, and they may not observe professional journalistic norms such as requirements of multiple sources for factual claims. 97 Larger online news companies may simply ignore those norms as irrelevant to maximizing profits. The challenge that Internet news poses to established systems of credibility can be liberating and democratizing. However, the dizzying variety of news outlets online, combined with most of those outlets’ failure to meet professional standards of journalistic reliability, can leave audiences uncertain about which if any news they can trust. The rise of social media has deepened the problem. Facebook and Twitter have become dominant engines of news delivery. 98 The ease of access to social media means that online audiences sometimes get the worst of both news worlds: reports that combine the market penetration of pre-Internet news behemoths with the ethical commitments of, at best, amateur bloggers or, at worst, wilful propagandists.

The second problem the Internet creates for journalism is the fragmentation of audience attention. The Internet can provide every user with customized, personally tailored information. 99 This capacity both enhances individual autonomy and facilitates the creation of virtual communities that promote all manner of collective social goals. 100 At the same time, the Internet’s customization of information, especially as to news about matters of public concern, threatens social cohesion by tying distinct communities to divergent sources of information. Social media platforms strongly exacerbate this threat by allowing users to build insular communities in which members reinforce one another’s beliefs and biases. 101 As a result, post-modern conceptions of radical uncertainty about truth have become directly relevant for day-to-day social reality. 102 For news in particular, the decline of consensus around basic premises means that democratic citizens can get information through filters that comport with their ideological biases and the biases of their communal enclaves. To take two vivid examples, social media news polarization strongly influenced UK voters’ positions on the 2016 European Union referendum 103 and US voters’ political views in the run-up to the 2016 national election. 104 At the extreme, virtual echo chambers can become fertile ground for extremist political movements to isolate and radicalize vulnerable and alienated individuals.
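
A minimal sketch of the feedback loop behind such insularity: a recommender that ranks stories by similarity to what the user has already chosen, so that each choice narrows the next set of options. The topics and click history are hypothetical.

    from collections import Counter

    # Hypothetical click history and candidate stories, each tagged with a topic.
    history = ["immigration-critical", "immigration-critical", "tax-cuts"]
    candidates = [
        {"id": "a", "topic": "immigration-critical"},
        {"id": "b", "topic": "immigration-supportive"},
        {"id": "c", "topic": "tax-cuts"},
    ]

    def personalised_feed(history, candidates):
        # Stories matching topics the user already engaged with rank highest,
        # so each click further narrows what the user is likely to see next.
        affinity = Counter(history)
        return sorted(candidates, key=lambda story: affinity[story["topic"]], reverse=True)

    print([story["id"] for story in personalised_feed(history, candidates)])   # ['a', 'c', 'b']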

The combination of diminished professional standards and splintered audience perceptions of reality creates dangerous opportunities for using news reports to manipulate public opinion. In totalitarian and authoritarian societies, governments brazenly manipulate the Internet and social media to spread propaganda and undermine democracy at home and abroad. 105 In democratic societies, unscrupulous actors use the Internet and social media to covertly promote their political agendas. The most notorious instance is the Russian government’s secret use of Facebook and Twitter to influence the 2016 US presidential election in favour of the eventual narrow winner, Donald Trump. 106 The self-interest of social media platforms raises further concerns about the quality of information those platforms make available to audiences. For example, YouTube enhances its advertising revenues by demonetizing less popular channels and steering users towards more sensational videos. 107 A similar problem of reliability arises when search engines seek to profit by rigging search results. 108

The Internet’s fractured news environment raises difficult questions about whether and how democratic governments can or should try to exert control over news intermediaries in order to promote the public good. Both EU 109 and US 110 regulators have proposed requiring social media platforms to tell users more about the sources of information in their feeds. Further approaches might include public subsidies to promote investigative journalism; structural regulations to promote economic competition among online intermediaries; and indirect content regulations, like the European approach to hate speech and the US approach to intellectual property, which would require major news intermediaries, including social media platforms, to monitor the content they propagate and to remove false or misleading information. All of those approaches present problems. Public subsidies raise worries that government might try to influence news content. Structural regulation may not be feasible for online intermediary functions like search engines and social media, which have tended towards market dominance by one or a few companies. Indirect regulation of news content would, as in other settings, empower private intermediaries to make important decisions about social life. That problem would be especially worrisome as to news, which plays a crucial role in facilitating democratic self-government.

19.4 Conclusion

The Internet has emerged as our dominant medium of communication, and it never stops evolving. Efforts to address the online free speech problems discussed in this chapter, from private power and inequality to hate speech and the credibility of news sources, have only just begun. More than ever, different legal regimes’ divergent approaches to free speech problems may serve as laboratories of democracy, especially as the Internet’s transcendence of national boundaries brings those approaches into direct conflict. The Internet increases both the good and the harm that speech can do. It therefore intensifies the challenges of protecting free speech while also finding regulatory strategies, consistent with the values of liberal democracy, to address damaging speech.

Lawrence Lessig, Code and Other Laws of Cyberspace, Version 2.0 (Basic Books 2006).

For a discussion of the development and structure of the Internet’s architecture, see Manuel Castells, The Internet Galaxy: Reflections on the Internet, Business, and Society (OUP 2001) 9–33.

The Internet requires some institutional co-ordination, for example in the assignment and maintenance of domain names. See ‘Domain Name Registration Process’ ( ICANN WHOIS , July 2017) < https://whois.icann.org/en/domain-name-registration-process > accessed 28 August 2018.

Simon Kemp, ‘Digital in 2018: World’s Internet Users Pass the 4 Billion Mark’ ( We Are Social , 30 January 2018) < https://wearesocial.com/blog/2018/01/global-digital-report-2018 > accessed 28 August 2018.

For a general description and discussion of social media, see Andreas M Kaplan and Michael Haenlein, ‘Users of the World, Unite! The Challenges and Opportunities of Social Media’ (2010) 53 Business Horizons 59.

Kemp (n 4).

The US Supreme Court, for example, has applied constitutional protections against warrantless searches to the digital contents of mobile phones. Riley v California 134 S Ct 2473 (2014).

At the same time, personal communication in the physical world retains distinctive value that the Internet cannot replace. See Timothy Zick, Chapter 20 in this volume.

Office of the High Commissioner for Human Rights (OHCHR), ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ (7 April 2018) UN Doc A/HRC/38/L.10/Rev.1.

  Reno v American Civil Liberties Union , 521 US 844, 868 (1997).

Telecom Regulatory Policy CRTC 2016–496 ( Canadian Radio-Television and Telecommunications Commission , 21 December 2016) < https://crtc.gc.ca/eng/archive/2016/2016-496.htm > accessed 28 August 2018.

Kris Ruijgrok, ‘From the Web to the Streets: Internet and Protests under Authoritarian Regimes’ (2017) 24 Democratization 498.

Sarah Joseph, ‘Social Media, Political Change, and Human Rights’ (2012) 35 BC Int’l & Comp L Rev 145.

Jerome A Barron, ‘Access to the Press: A New First Amendment Right’ (1967) 80 Harv L Rev 1641; Kenneth L Karst, ‘Equality as a Central Principle in the First Amendment’ (1975) 43 U Chi L Rev 20; Owen M. Fiss, ‘Free Speech and Social Structure’ (1986) 71 Iowa L Rev 1405.

Eugene Volokh, ‘Cheap Speech and What It Will Do’ (1995) 104 Yale LJ 1805.

‘Measuring the Information Society Report’ (2016) International Telecommunications Union < https://www.itu.int/en/ITU-D/Statistics/Documents/publications/misr2016/MISR2016-w4.pdf > accessed 28 August 2018.

Tim Wu, ‘Is the First Amendment Obsolete?’ ( Knight First Amendment Institute , September 2017) < https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete > accessed 28 August 2018.

Gregory P Magarian, ‘Forward into the Past: Speech Intermediaries in the Television and Internet Ages’ (2018) 71 Okla L Rev 237.

For a discussion of one important example, see Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU P 2018).

Lyombe Eko, ‘Google This: The Great Firewall of China, the IT Wheel of India, Google Inc., and Internet Regulation’ (2011) 15 J Internet L 3.

See Dieter Grimm, ‘Freedom of Media’, Chapter 29 in this volume.

C Edwin Baker, Media Concentration and Democracy: Why Ownership Matters (CUP 2007) 163–89 (discussing a range of actual and proposed policies for dispersal of media ownership).

Dawn C Nunziato, Virtual Freedom: Net Neutrality and Free Speech in the Internet Age (Stanford Law Books 2009).

Christopher S Yoo, ‘Network Neutrality and the Economics of Congestion’ (2006) 94 Geo LJ 1847.

Law no 20,453 (Biblioteca del Congreso Nacional de Chile, 26 August 2010) < https://www.leychile.cl/Navegar?idNorma=1016570 > accessed 28 August 2018.

Regulation (EU) 2015/2120; ‘All You Need to Know about Net Neutrality Rules in the EU’ ( Body of European Regulators for Electronic Communications ) < https://berec.europa.eu/eng/netneutrality/ > accessed 28 August 2018.

  Telecommunications Act , SC 1993, c 38.

  Telecom Decision CRTC 2011–44 , Canadian Radio-Television and Telecommunications Commission (25 January 2011).

Cecelia Kang, ‘FCC Repeals Net Neutrality Rules’ ( New York Times , 14 December 2017) < https://www.nytimes.com/2017/12/14/technology/net-neutrality-repeal-vote.html > accessed 28 August 2018.

Moran Yemini, ‘Mandated Network Neutrality and the First Amendment: Lessons From Turner and a New Approach’ (2008) 13 Va J L & Tech 1.

  US Telecom Ass’n v Fed Communications Comm’n No 15–1063 (DC Cir 1 May 2017) (Kavanaugh J, dissenting from denial of rehearing en banc).

Stuart Minor Benjamin, ‘Algorithms and Speech’ (2013) 161 U Penn L Rev 1445.

Frederick Schauer, ‘The Boundaries of the First Amendment: A Preliminary Exploration of Constitutional Salience’ (2004) 117 Harv L Rev 1765, 1770–1.

Compare Ashutosh Bhagwat, ‘ Sorrell v IMS Health : Details, Detailing, and the Death of Privacy’ (2012) 36 Vt L Rev 855 (arguing against generally treating data flows as protected speech) with Jane Bambauer, ‘Is Data Speech?’ (2014) 66 Stan L Rev 57 (arguing in favor of generally treating data flows as protected speech).

  Sorrell v IMS Health, Inc , 564 US 552, 570 (2011).

  Reno v American Civil Liberties Union , 521 US 844 (1997).

Evgeny Morozov, The Net Delusion: The Dark Side of Internet Freedom (Public Affairs 2011).

Derek E Bambauer, ‘Orwell’s Armchair’ (2012) 79 U Chi L Rev 863.

Nunziato (n 24) 110–13.

Rebecca Tushnet, ‘Power Without Responsibility: Intermediaries and the First Amendment’ (2008) 76 Geo Wash L Rev 986.

‘Freedom on the Net 2017: Manipulating Social Media to Undermine Democracy’ (2017) Freedom House < https://freedomhouse.org/report/freedom-net/freedom-net-2017 > accessed 28 August 2018 (documenting suppression of Internet speech rights in numerous countries).

John Herrman, ‘How Hate Groups Forced Online Platforms to Reveal Their True Nature’ ( New York Times Magazine , 21 August 2017) < https://www.nytimes.com/2017/08/21/magazine/how-hate-groups-forced-online-platforms-to-reveal-their-true-nature.html > accessed 28 August 2018.

  Matal v Tam , 137 S Ct 1744 (2017) (striking down a prohibition on federal registration of derogatory trademarks).

Charter of Fundamental Rights of the European Union [2012] OJ 326, title II, art 11.

Ibid, title II, art 12.

Ibid, title I, art 1.

Ibid, title II, arts 7, 8.

Ibid, title III.

European Commission Press Release IP/16/1937: European Commission and IT Companies Announce Code of Conduct on Illegal Online Hate Speech ( European Commission , 31 May 2016) < http://europa.eu/rapid/press-release_IP-16-1937_en.htm > accessed 28 August 2018.


Freedom of expression in the Digital Age: Internet Censorship

Md Nurul Momen

First online: 8 May 2020

Freedom of expression includes freedom to hold opinions and ideas and to receive and impart information without restrictions by state authorities.

Introduction

The Internet is a central factor shaping free expression in today's volatile human rights landscape (Momen 2020). In the digital age, authoritarian governments routinely attempt to undermine political and social movements by shutting down the Internet entirely or allowing only partial access to it. Restrictions on freedom of expression online also operate through surveillance and monitoring of users' activities. In response to political and social movements, such governments have shut down numerous websites and arrested anti-government bloggers and political activists. However, under international legal instruments, for instance the Universal Declaration of Human Rights (UDHR), denial of the...

References

Ariffin, L. J. (2012). Rais backs Dr M call for curbs to Internet freedom . https://www.malaysia-today.net/2012/06/05/rais-backs-dr-m-call-for-curbs-to-internet-freedom/ . Accessed 10 June 2018.

Arnaudo, D., Alva, A., Wood, P., & Whittington, J. (2013). Political and economic implications of authoritarian control of the internet. In J. Butts & S. Shenoi (Eds.), Critical infrastructure protection VII (IFIP AICT) (Vol. 417, pp. 3–19). Berlin, Heidelberg: Springer.

Cristiano, F. (2019). Internet access as human right: A dystopian critique from the occupied Palestinian territory. In G. Blouin-Genest, M. C. Doran, & S. Paquerot (Eds.), Human rights as battlefields (Human rights interventions). Cham: Palgrave Macmillan. https://doi.org/10.1007/978-3-319-91770-2_12 .

Diamond, L. (2010). Liberation technology. Journal of Democracy, 21 (3), 69–83. https://doi.org/10.1353/jod.0.0190 .

Freedom House. (2019). Freedom on the Net . Washington, DC/New York. Retrieved from https://www.freedomonthenet.org/countries-in-detail

Hill, D. T. (2002). East Timor and the Internet: Global political leverage in/on Indonesia. Indonesia, 73 , 25–51.

Kee, J. S. (2012). Bad laws won’t stop cyber crime . https://www.loyarburok.com/2012/05/28/bad-laws-stop-cyber-crime/?doing_wp_cron . Accessed 10 June 2019.

Momen, M. N. (2020). Myth and reality of freedom of expression on the Internet. International Journal of Public Administration, 43 (3), 277–281. https://doi.org/10.1080/01900692.2019.1628055 .

Nocetti, J. (2015). Contest and conquest: Russia and global Internet governance. International Affairs, 91 (1), 111–130. https://doi.org/10.1111/1468-2346.12189 .

Randall, J. (1996). Of cracks and crackdown: Five translations of recent Internet postings. Indonesia, 62 , 37–51.

Rodan, G. (1998). The Internet and political control in Singapore. Political Science Quarterly, 113 (1), 63–89.

Shirokanova, A., & Silyutina, O. (2018). Internet regulation: A text-based approach to media coverage. In D. A. Alexandrov et al. (Eds.), Digital Transformation and Global Society (DTGS) 2018 (Communications in computer and information science (CCIS)) (Vol. 858, pp. 181–194). Cham: Springer. https://doi.org/10.1007/978-3-030-02843-5_15 .

Ziccardi, G. (2013). Digital activism, internet control, transparency, censorship, surveillance and human rights: An international perspective. In Resistance, liberation technology and human rights in the digital age (Law, governance and technology series) (Vol. 7). Dordrecht: Springer. https://doi.org/10.1007/978-94-007-5276-4_6 .


3 Freedom of expression and the Internet

The Internet has opened up new possibilities for the realisation of the right to freedom of expression. This is due to the Internet’s unique characteristics, including ‘its speed, worldwide reach and relative anonymity’. [9] These distinctive features have enabled individuals to use the Internet to disseminate information in ‘real time’, and to mobilise people. [10] The United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (Special Rapporteur) asserts that:

Unlike any other medium the Internet facilitated the ability of individuals to seek, receive and impart information and ideas of all kinds instantaneously and inexpensively across national borders. By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an ‘enabler’ of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole. [11]

Insofar as freedom of expression is concerned, the Internet presents a compelling platform for the decentralisation of information and of institutional control – at its best, it acts as a leveller of access to knowledge.

However, as the Special Rapporteur acknowledges, ‘like all technological inventions, the Internet can be misused to cause harm to others’. [12]

3.1 Freedom of expression in human rights theory

The right to freedom of expression is deeply rooted in historical thought and underpinned by a number of largely interdependent rationales. Among these is the ‘truth rationale’, under which ‘true opinion’ can be identified and ‘false ideas’ exposed through criticism – a process facilitated by a free-flowing ‘marketplace of ideas’. [13]

The ‘democratic rationale’ identifies freedom of expression as necessary for the functioning of a truly representative government. [14] The HRC has emphasised the importance of press and media freedom for a democratic society:

A free, uncensored and unhindered press or other media is essential in any society to ensure freedom of opinion and expression and the enjoyment of other Covenant rights. It constitutes one of the cornerstones of a democratic society. ... The free communication of information and ideas about public and political issues between citizens, candidates and elected representatives is essential. This implies a free press and other media able to comment on public issues without censorship or restraint and to inform public opinion. The public also has a corresponding right to receive media output. [15]

A core rationale for freedom of expression is the ‘self-determination rationale’, in which free speech is conceived of as an aspect of self-realisation and individual autonomy. [16] The ability to relate our thoughts and experiences is seen as an intrinsic part of being human, and therefore restrictions on this ability are viewed as inhibiting both individual autonomy and the ability to attain self-fulfilment.

In this vein, the HRC has also noted that freedom of information and expression, while central to democratic governance, is not restricted to political information and expression; it

includes the expression and receipt of communications of every form of idea and opinion capable of transmission to others , subject to the provisions in article 19, paragraph 3, and article 20. It includes political discourse, commentary on one’s own and on public affairs, canvassing, discussion of human rights, journalism, cultural and artistic expression, teaching, and religious discourse. It may also include commercial advertising. [17]

Accordingly, the right to freedom of expression has been described as an ‘enabler of other rights’ such as economic, social and cultural rights (i.e. rights to education and to take part in cultural life) as well as civil and political rights (i.e. rights to freedom of association and assembly). [18]

3.2 Freedom of expression and information in Australian law

In Australia there is no express Constitutional or legislative protection of the freedom of expression at the federal level (in contrast to the human rights legislation in force in the ACT and Victoria). [19] Despite this, the courts have an important role in interpreting legislation consistently with human rights where possible. [20]

Although not expressly protected at a federal level, freedom of expression does enjoy some implied and residual [21] protection. The Australian High Court has held that an implied freedom of political communication ‘is an indispensable incident of the system of representative government which the Constitution creates’. [22]

The freedom of political communication found by the High Court to be implicit in the Constitution is unlikely to have the same breadth of subject matter as article 19(2) of the ICCPR, insofar as the latter goes beyond political matters. However, the very fact of restrictions being placed on freedom of expression on other subjects – including on grounds such as decency – may in some instances itself give the restricted or prohibited expression the status of political communication.

A number of potential restrictions on the right to freedom of expression are contemplated by Australian laws, including in laws on sedition; [23] national security; [24] telecommunications; [25] racial hatred; [26] copyright; [27] defamation; [28] perjury; [29] contempt of court; [30] fraud; [31] privacy, [32] and censorship in classification and broadcasting. [33]

A number of these laws are based on valid grounds for restriction referred to in article 19(3) of the ICCPR. However, questions remain as to whether some of these laws would meet the levels of transparency and proportionality required by article 19(3).

These questions raise broader concerns about censorship and the Internet. In particular, the Special Rapporteur notes the use of arbitrary blocking or filtering of content where such mechanisms are used to regulate and censor information on the Internet, with multi-layered controls that are often hidden from the public. [34] An example of such a system close to home was the Australian Government’s now discontinued mandatory Internet filtering proposal. This attracted wide-ranging criticism for setting broad and imprecisely defined parameters on what constituted ‘refused classification’ materials, with the result that websites described by critics of the proposal as relatively innocuous were captured by the filter. [35]

As the Special Rapporteur points out, excessive censoring can occur where the specific conditions that justify blocking are not established in law or are legislated for in an ‘overly broad and vague manner’. [36] In addition, even where justification for blocking exists, blocking measures may constitute a disproportionate means to achieving the purported aim, and content may frequently be blocked without the possibility of judicial or independent review. [37] In each case, freedom of expression must be weighed against the other rights and interests at stake in striking an appropriate balance.
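To make the over-blocking concern concrete, the following sketch (a minimal illustration in Python, using an invented blocklist rather than any real filtering scheme or the discontinued Australian proposal) shows how a rule framed at the level of a whole domain necessarily captures every page hosted under that domain, innocuous or not.

```python
from urllib.parse import urlparse

# Hypothetical, illustrative blocklist entries; real filtering regimes are far
# more elaborate, but domain-level rules share this basic shape.
BLOCKED_DOMAINS = {"example-banned-site.org", "example-forum.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host falls under any blocked domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

if __name__ == "__main__":
    # A single offending page causes every unrelated page on the same domain to be blocked too.
    print(is_blocked("https://example-forum.net/offending-thread"))  # True
    print(is_blocked("https://example-forum.net/harmless-recipes"))  # True: over-blocking
    print(is_blocked("https://unrelated-site.org/news"))             # False
```

Coarseness of this kind is one reason the Special Rapporteur’s criteria of legal certainty, proportionality and independent review matter: the narrower the stated justification for blocking, the harder it is to defend rules that sweep this broadly.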

3.3 Right to freedom of expression and information in human rights instruments

‘Human rights’ for the purposes of the Commission’s work include the rights and freedoms recognised in the ICCPR, including the right to freedom of expression and information in article 19. As discussed on the Commission’s webpage on the right to freedom of information, opinion and expression, [38] this right is also recognised and expanded on in the Convention on the Rights of the Child (CRC) [39] and the Convention on the Rights of Persons with Disabilities . [40] Freedom of expression and information is also recognised in article 19 of the Universal Declaration of Human Rights . [41]

The following discussion will focus on the right to freedom of expression as recognised by article 19 of the ICCPR.

[9] F La Rue, Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression , Report to the Human Rights Council, 17th session, UN Doc A/HRC/17/27 (2011), p 7. At http://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/Annual.aspx (viewed 27 August 2013).

[10] F La Rue, above, p 7.

[11] F La Rue, above, p 19.

[12] F La Rue, above.

[13] D Rolph, M Vittins and J Bannister, Media Law: Cases, Materials and Commentary (2010), pp 23-26.

[14] D Rolph, M Vittins and J Bannister, above, p 23.

[15] Human Rights Committee, General Comment No. 34 , note 4, para 13.

[16] D Rolph, M Vittins and J Bannister, note 13, p 23.

[17] Human Rights Committee, General Comment No. 34 , note 4, para 11 (emphasis added).

[18] F La Rue, note 9, p 7.

[19] See the Human Rights Act 2004 (ACT) and Charter of Human Rights and Responsibilities Act 2006 (Vic).

[20] See the Commission’s page Common law rights and human rights scrutiny for more discussion: http://www.humanrights.gov.au/common-law-rights-and-human-rights-scrutiny .

[21] See Brown v Classification Review Board (1997) 154 ALR 67, in which French J stated (at 76): ‘A person may say and write anything he pleases except in so far as he may not’. See also Lange v Australian Broadcasting Corporation (1997) 189 CLR 520, in which the High Court stated (at 567): ‘Within our legal system, communications are free only to the extent that they are left unburdened by the laws that comply with the Constitution’.

[22] Lange v Australian Broadcasting Corporation (1997) 189 CLR 520, 599. See Nationwide News Pty Ltd v Wills (1992) 177 CLR 1; Australian Capital Television Pty Ltd & New South Wales v Commonwealth (1992) 177 CLR 106, and the discussion in D Rolph, M Vittins and J Bannister, note 13, pp 32-43.

[23] See the ‘urging violence’ offences in ss 80.2 – 80.2B of the Criminal Code Act 1995 (Cth).

[24] See, for example, the restrictions which may be placed on communication by certain individuals who are made the subject of control orders or preventative detention orders: Criminal Code Act 1995 (Cth) s 104.5(3)(e) and ss 105.15, 105.16, and 105.34.

[25] See the offences in Part 10.6, Div 474, Sub-div C of the Criminal Code Act 1995 (Cth).

[26] See, for example, Racial Discrimination Act 1975 (Cth) s 18C; Anti-Discrimination Act 1977 (NSW) s 20C; Racial and Religious Tolerance Act 2001 (Vic) ss 7 and 8.

[27] See the Copyright Act 1968 (Cth).

[28] See the discussion in N O’Neill, S Rice and R Douglas, Retreat From Injustice: Human Rights Law in Australia (2 nd ed, 2004), Chapter 17.

[29] See, for example, Crimes Act 1900 (NSW) s 327.

[30] See the discussion in N O’Neill, S Rice and R Douglas, note 28, Chapter 16, particularly the section entitled ‘Contempt by Criticising or “Scandalising” the Courts’.

[31] See, for example, Crimes Act 1900 (NSW) s 192G.

[32] See the Privacy Act 1988 (Cth).

[33] See for example Classification (Publications, Films and Computer Games) Act 1995 (Cth) and the Broadcasting Services Act 1992 (Cth).

[34] F La Rue, note 9, p 9.

[35] A Moses, ‘Filter was white elephant waiting to happen’, The Sydney Morning Herald , 9 November 2012. At http://www.smh.com.au/technology/technology-news/filter-was-white-elephant-waiting-to-happen-20121109-2923o.html (viewed 27 August 2013).

[36] F La Rue, note 9, p 10.

[37] F La Rue, above.

[38] See http://www.humanrights.gov.au/right-freedom-information-opinion-and-expression#other .

[39] Opened for signature 20 November 1989, 1577 UNTS 3 (entered into force 2 September 1990) (CRC). At http://www.austlii.edu.au/au/other/dfat/treaties/1991/4.html (viewed 27 August 2013).

[40] Opened for signature 30 March 2007, 2515 UNTS 3 (entered into force 3 May 2008). At http://www.austlii.edu.au/au/other/dfat/treaties/ATS/2008/12.html (viewed 27 August 2013).

[41] Universal Declaration of Human Rights , UN General Assembly Resolution 217A(III), UN Doc A/810, 71 (UDHR) (1948). At http://www.un.org/en/documents/udhr/ (viewed 27 August 2013).

By William Fisher

Last Updated June 14, 2001

Introduction

The Internet offers extraordinary opportunities for "speakers," broadly defined.  Political candidates, cultural critics, corporate gadflies -- anyone who wants to express an opinion about anything -- can make their thoughts available to a world-wide audience far more easily than has ever been possible before.  A large and growing group of Internet participants has seized that opportunity.

Some observers find the resultant outpouring of speech exhilarating.  They see in it nothing less than the revival of democracy and the restoration of community.  Other observers find the amount -- and, above all, the kind of speech -- that the Internet has stimulated offensive or frightening.  Pornography, hate speech, lurid threats -- these flourish alongside debates over the future of the Democratic Party and exchanges of views concerning flyfishing in Patagonia.  This phenomenon has provoked various efforts to limit the kind of speech in which one may engage on the Internet -- or to develop systems to "filter out" the more offensive material.  This module examines some of the legal issues implicated by the increasingly bitter struggle between the advocates of "free speech" and the advocates of filtration and control.

Background

Before plunging into the details of the proliferating controversies over freedom of expression on the Internet, you need some background information on two topics.

The first and more obvious is the Free-Speech Clause of the First Amendment to the United States Constitution.  The relevance and authority of the First Amendment should not be exaggerated; as several observers have remarked, "on the Internet, the First Amendment is just a local ordinance."  However, free-expression controversies that arise in the United States inevitably implicate the Constitution, and the arguments deployed in the course of American First-Amendment fights often inform or infect the handling of free-expression controversies in other countries.  The upshot: First-Amendment jurisprudence is worth studying.

Unfortunately, that jurisprudence is large and arcane.  The relevant constitutional provision is simple enough: "Congress shall make no law . . . abridging the freedom of speech, or of the press . . .."  But the case law that, over the course of the twentieth century, has been built upon this foundation is complex.  An extremely abbreviated outline of the principal doctrines would go as follows:

  • If a law gives no clear notice of the kind of speech it prohibits, it’s "void for vagueness."
  • If a law burdens substantially more speech than is necessary to advance a compelling government interest, it’s unconstitutionally "overbroad."
  • A government may not force a person to endorse any symbol, slogan, or pledge.
  • Governmental restrictions on the "time, place, and manner" in which speech is permitted are constitutional if and only if: they are "content neutral," both on their face and as applied; they leave substantial other opportunities for speech to take place; and they "narrowly serve a significant state interest."
  • On state-owned property that does not constitute a "public forum," government may restrict speech in any way that is reasonable in light of the nature and purpose of the property in question.
  • Content-based governmental restrictions on speech are unconstitutional unless they advance a "compelling state interest."  To this principle, there are six exceptions:

1.  Speech that is likely to lead to imminent lawless action may be prohibited.
2.  "Fighting words" -- i.e., words so insulting that people are likely to fight back -- may be prohibited.
3.  Obscenity -- i.e., erotic expression, grossly or patently offensive to an average person, that lacks serious artistic or social value -- may be prohibited.
4.  Child pornography may be banned whether or not it is legally obscene and whether or not it has serious artistic or social value, because it induces people to engage in lewd displays, and the creation of it threatens the welfare of children.
5.  Defamatory statements may be prohibited.  (In other words, the making of such statements may constitutionally give rise to civil liability.)  However, if the target of the defamation is a "public figure," she must prove that the defendant acted with "malice."  If the target is not a "public figure" but the statement involved a matter of "public concern," the plaintiff must prove that the defendant acted with negligence concerning its falsity.
6.  Commercial speech may be banned only if it is misleading, pertains to illegal products, or directly advances a substantial state interest with a degree of suppression no greater than is reasonably necessary.

If you are familiar with all of these precepts -- including the various terms of art and ambiguities they contain -- you're in good shape. If not, you should read some more about the First Amendment.  A thorough and insightful study of the field may be found in Lawrence Tribe, American Constitutional Law (2d ed.), chapter 12.  Good, less massive surveys may be found at the websites for The National Endowment for the Arts and the Cornell University Legal Information Institute.

The second of the two kinds of background you might find helpful is a brief introduction to the current debate among academics over the character and desirability of what has come to be called "cyberdemocracy."  Until a few years ago, many observers thought that the Internet offered a potential cure to the related diseases that have afflicted most representative democracies in the late twentieth century:  voter apathy; the narrowing of the range of political debate caused in part by the inertia of a system of political parties; the growing power of the media, which in turn seems to reduce discussion of complex issues to a battle of "sound bites"; and the increasing influence of private corporations and other sources of wealth.  All of these conditions might be ameliorated, it was suggested, by the ease with which ordinary citizens could obtain information and then cheaply make their views known to one another through the Internet.

A good example of this perspective is a recent article by Bernard Bell , where he suggests that “[t]he Internet has, in many ways, moved society closer to the ideal Justice Brennan set forth so eloquently in New York Times v. Sullivan .  It has not only made debate on public issues more 'uninhibited, robust, and wide-open,' but has similarly invigorated discussion of non-public issues. By the same token, the Internet has empowered smaller entities and even individuals, enabling them to widely disseminate their messages and, indeed, reach audiences as broad as those of established media organizations.”

Recently, however, this rosy view has come under attack.  The Internet, skeptics claim, is not a giant "town hall."  The kinds of information flows and discussions it seems to foster are, in some ways, disturbing.  One source of trouble is that the Internet encourages like-minded persons (often geographically dispersed) to cluster together in bulletin boards and other virtual clubs.  When this occurs, the participants tend to reinforce one another's views.  The resultant "group polarization" can be ugly.  More broadly, the Internet seems at least potentially corrosive of something we have long taken for granted in the United States: a shared political culture.  When most people read the same newspaper or watch the same network television news broadcast each day, they are forced at least to glance at stories they might find troubling and become aware of persons and groups who hold views sharply different from their own.  The Internet makes it easy for people to avoid such engagement -- by enabling people to select their sources of information and their conversational partners.  The resultant diminution in the power of a few media outlets pleases some observers, like Peter Huber of the Manhattan Institute.  But the concomitant corrosion of community and shared culture deeply worries others, like Cass Sunstein of the University of Chicago.

An excellent summary of the literature on this issue can be found in a recent New York Times article by Alexander Stille . If you are interested in digging further into these issues, we recommend the following books:

  • Cass Sunstein, Republic.com (Princeton Univ. Press 2001)
  • Peter Huber, Law and Disorder in Cyberspace: Abolish the F.C.C. and Let Common Law Rule the Telecosm (Oxford Univ. Press 1997)

To test some of these competing accounts of the character and potential of discourse on the Internet, we suggest you visit - or, better yet, participate in - some of the sites at which Internet discourse occurs. Here's a sampler:

  • MSNBC Political News Discussion Board

Current Controversies

1.  Restrictions on Pornography

Three times in the past five years, critics of pornography on the Internet have sought, through federal legislation, to prevent children from gaining access to it.  The first of these efforts was the Communications Decency Act of 1996 (commonly known as the "CDA"), which (a) criminalized the "knowing" transmission over the Internet of "obscene or indecent" messages to any recipient under 18 years of age and (b) prohibited the "knowin[g]" sending or displaying to a person under 18 of any message "that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs."  Persons and organizations who take "good faith, . . . effective . . . actions" to restrict access by minors to the prohibited communications, or who restricted such access by requiring certain designated forms of age proof, such as a verified credit card or an adult identification number, were exempted from these prohibitions.

The CDA was widely criticized by civil libertarians and soon succumbed to a constitutional challenge.  In 1997, the United States Supreme Court struck down the statute, holding that it violated the First Amendment in several ways:

  • because it restricted speech on the basis of its content, it could not be justified as a "time, place, and manner" regulation;
  • its references to "indecent" and "patently offensive" messages were unconstitutionally vague;
  • its supposed objectives could all be achieved through regulations less restrictive of speech;
  • it failed to exempt from its prohibitions sexually explicit material with scientific, educational, or other redeeming social value.

Two aspects of the Court's ruling are likely to have considerable impact on future constitutional decisions in this area.  First, the Court rejected the Government's effort to analogize the Internet to traditional broadcast media (especially television), which the Court had previously held could be regulated more strictly than other media.  Unlike TV, the Court reasoned, the Internet has not historically been subject to extensive regulation, is not characterized by a limited spectrum of available frequencies, and is not "invasive."  Consequently, the Internet enjoys full First-Amendment protection.  Second, the Court encouraged the development of technologies that would enable parents to block their children's access to Internet sites offering kinds of material the parents deemed offensive.
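The filtering technologies the Court had in mind were, and largely remain, blunt instruments.  The sketch below (purely illustrative Python, with invented placeholder terms standing in for any vendor's actual word list) shows the simplest keyword-matching approach and why it tends to over-block legitimate material while missing much of what it targets.

```python
import re

# Hypothetical blocked terms; real products use much longer lists plus image
# analysis, but the basic failure modes of keyword matching show up even here.
BLOCKED_TERMS = {"exampleterm1", "exampleterm2"}

def page_allowed(page_text: str) -> bool:
    """Return False if any blocked term appears as a whole word in the page text."""
    words = set(re.findall(r"[a-z0-9]+", page_text.lower()))
    return not (words & BLOCKED_TERMS)

if __name__ == "__main__":
    print(page_allowed("A clinical health pamphlet that mentions exampleterm1."))  # False: over-blocking
    print(page_allowed("A page that misspells exampleterm1 as exampl3term1."))     # True: under-blocking
```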

A year later, pressured by vocal opponents of Internet pornography -- such as "Enough is Enough" and the National Law Center for Children and Families -- Congress tried again.  The 1998 Child Online Protection Act (COPA) obliged commercial Web operators to restrict access to material considered "harmful to minors" -- which was, in turn, defined as any communication, picture, image, graphic image file, article, recording, writing or other matter of any kind that is obscene or that meets three requirements:

(1) "The average person, applying contemporary community standards, would find, taking the material as a whole and with respect to minors, is designed to appeal to, or is designed to pander to, the prurient interest." (2) The material "depicts, describes, or represents, in a manner patently offensive with respect to minors, an actual or simulated sexual act or sexual conduct, an actual or simulated normal or perverted sexual act or a lewd exhibition of the genitals or post-pubescent female breast." (3) The material, "taken as a whole, lacks serious literary, artistic, political, or scientific value for minors."  

Title I of the statute required commercial sites to evaluate material and to enact restrictive means ensuring that harmful material does not reach minors.  Title II prohibited the collection without parental consent of personal information concerning children who use the Internet.  Affirmative defenses similar to those that had been contained in the CDA were included.

Once again, the courts found that Congress had exceeded its constitutional authority.  In the judgment of the Third Circuit Court of Appeals , the critical defect of COPA was its reliance upon the criterion of "contemporary community standards" to determine what kinds of speech are permitted on the Internet:

Because material posted on the Web is accessible by all Internet users worldwide, and because current technology does not permit a Web publisher to restrict access to its site based on the geographic locale of each particular Internet user, COPA essentially requires that every Web publisher subject to the statute abide by the most restrictive and conservative state's community standard in order to avoid criminal liability.

The net result was to impose burdens on permissible expression more severe than can be tolerated by the Constitution.  The court acknowledged that its ruling did not leave much room for constitutionally valid restrictions on Internet pornography:

We are forced to recognize that, at present, due to technological limitations, there may be no other means by which harmful material on the Web may be constitutionally restricted, although, in light of rapidly developing technological advances, what may now be impossible to regulate constitutionally may, in the not-too-distant future, become feasible.  
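The technological limitation the court describes can be illustrated with a toy example.  The sketch below (illustrative Python only, using a tiny hard-coded table of hypothetical address ranges in place of a real geolocation database) shows why mapping a visitor to a particular community's standards is guesswork: many addresses cannot be placed at all, and even confident guesses are defeated by proxies, VPNs, and corporate gateways.

```python
import ipaddress

# Hypothetical address ranges standing in for a commercial geolocation database.
COUNTRY_RANGES = {
    "US": [ipaddress.ip_network("203.0.113.0/24")],
    "AU": [ipaddress.ip_network("198.51.100.0/24")],
}

def guess_country(ip: str) -> str:
    """Best-effort guess at a visitor's country from an IP address."""
    addr = ipaddress.ip_address(ip)
    for country, networks in COUNTRY_RANGES.items():
        if any(addr in net for net in networks):
            return country
    return "UNKNOWN"  # Many addresses simply cannot be placed.

if __name__ == "__main__":
    print(guess_country("203.0.113.7"))  # "US"
    print(guess_country("192.0.2.55"))   # "UNKNOWN": no community standard to apply
```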

In late 2000, the anti-pornography forces tried once more.  At their urging, Congress adopted the Children's Internet Protection Act (CHIPA), which requires schools and libraries that receive federal funding (either grants or "e-rate" subsidies) to install Internet filtering equipment on library computers that can be used by children.  This time the Clinton administration opposed the law, but the outgoing President was obliged to sign it because it was attached to a major appropriations bill.

Opposition to CHIPA is intensifying.  Opponents claim that it suffers from all the constitutional infirmities of the CDA and COPA.  In addition, it will reinforce one form of the "digital divide" -- by subjecting poor children, who lack home computers and must rely upon public libraries for access to the Internet, to restrictions that more wealthy children can avoid.  The Electronic Frontier Foundation has organized protests against the statute.   In April of this year, several civil-liberties groups and public library associations filed suit in the Eastern District of Pennsylvania seeking a declaration that the statute is unconstitutional.  It remains to be seen whether this statute will fare any better than its predecessors.

The CDA, COPA, and CHIPA have one thing in common: they all involve overt governmental action -- and thus are subject to challenge under the First Amendment.  Some observers of the Internet argue that more dangerous than these obvious legislative initiatives are the efforts by private Internet Service Providers to install filters on their systems that screen out kinds of content that the ISPs believe their subscribers would find offensive.  Because policies of this sort are neither mandated nor encouraged by the government, they would not, under conventional constitutional principles, constitute "state action" -- and thus would not be vulnerable to constitutional scrutiny.  Such a result, argues Larry Lessig, would be pernicious; to avoid it, we need to revise our understanding of the "state action" doctrine.  Charles Fried disagrees:

Note first of all that the state action doctrine does not only limit the power of courts to protect persons from private power that interferes with public freedoms. It also protects individuals from the courts themselves, which are, after all, another government agency. By limiting the First Amendment to protecting citizens from government (and not from each other), the state action doctrine enlarges the sphere of unregulated discretion that individuals may exercise in what they think and say. In the name of First Amendment "values," courts could perhaps inquire whether I must grant access to my newspaper to opinions I abhor, must allow persons whose moral standards I deplore to join my expressive association, or must remain silent so that someone else gets a chance to reach my audience with a less appealing but unfamiliar message. Such inquiries, however, would place courts in the business of deciding which opinions I would have to publish in my newspaper and which would so distort my message that putting those words in my mouth would violate my freedom of speech; what an organization's associational message really is and whether forcing the organization to accept a dissenting member would distort that message; and which opinions, though unable to attract an audience on their own, are so worthy that they must not be drowned out by more popular messages. I am not convinced that whatever changes the Internet has wrought in our environment require the courts to mount this particular tiger.

"Perfect Freedom or Perfect Control," 114 Harvard Law Review 606, 635 (2000).

The United States may have led the way in seeking (unsuccessfully, thus far) to restrict the flow of pornography on the Internet, but the governments of other countries are now joining the fray.  For the status of the struggle in a few jurisdictions, you might read:

  • Joseph C. Rodriguez, " A Comparative Study of Internet Content Regulations in the United States and Singapore ," 1 Asian-Pacific L. & Pol'y J. 9 (February 2000).   (Singapore)

In a provocative recent article, Amy Adler  argues that the effort to curb child pornography online -- the kind of pornography that disgusts the most people -- is fundamentally misguided.  Far from reducing the incidence of the sexual abuse of children, governmental efforts to curtail child pornography only increase it.  A summary of her argument is available here .  The full article is available here .

2.  Threats

When does speech become a threat?  Put more precisely, when does a communication over the Internet inflict -- or threaten to inflict -- sufficient damage on its recipient that it ceases to be protected by the First Amendment and properly gives rise to criminal sanctions?  Two recent cases addressed that issue from different angles.

The first was popularly known as the "Jake Baker" case.  In 1994 and 1995, Abraham Jacob Alkhabaz, also known as Jake Baker, was an undergraduate student at the University of Michigan.  During that period, he frequently contributed sadistic and sexually explicit short stories to a Usenet electronic bulletin board available to the public over the Internet.  In one such story, he described in detail how he and a companion tortured, sexually abused, and killed a young woman, who was given the name of one of Baker's classmates.  (Excerpts from the story, as reprinted in the Court of Appeals decision in the case, are available here . WARNING: This material is very graphic in nature and may be troubling to some readers.  It is presented in order to provide a complete view of the facts of the case.)  Baker's stories came to the attention of another Internet user, who assumed the name of Arthur Gonda.  Baker and Gonda then exchanged many email messages, sharing their sadistic fantasies and discussing the methods by which they might kidnap and torture a woman in Baker's dormitory.  When these stories and email exchanges came to light, Baker was indicted for violation of 18 U.S.C. 875(c), which provides:  

Whoever transmits in interstate or foreign commerce any communication containing any threat to kidnap any person or any threat to injure the person of another, shall be fined under this title or imprisoned not more than five years, or both.  

Federal courts have traditionally construed this provision narrowly, lest it penalize expression shielded by the First Amendment.  Specifically, the courts have required that a defendant's statement, in order to trigger criminal sanctions, constitute a "true threat" -- as distinguished from, for example, inadvertent statements, hyperbole, innocuous talk, or political commentary.  Baker moved to quash the indictment on the ground that his statements on the Internet did not constitute "true threats." The District Court agreed , ruling that the class of women supposedly threatened was not identified in Baker's exchanges with Gonda with the degree of specificity required by the First Amendment and that, although Baker had expressed offensive desires, "it was not constitutionally permissible to infer an intention to act on a desire from a simple expression of desire."  The District Judge's concluding remarks concerning the character of threatening speech on the Internet bear emphasis:  

Baker's words were transmitted by means of the Internet, a relatively new communications medium that is itself currently the subject of much media attention.  The Internet makes it possible with unprecedented ease to achieve world-wide distribution of material, like Baker's story, posted to its public areas.  When used in such a fashion, the Internet may be likened to a newspaper with unlimited distribution and no locatable printing press - and with no supervising editorial control. But Baker's e-mail messages, on which the superseding indictment is based, were not publicly published but privately sent to Gonda.  While new technology such as the Internet may complicate analysis and may sometimes require new or modified laws, it does not in this instance qualitatively change the analysis under the statute or under the First Amendment.  Whatever Baker's faults, and he is to be faulted, he did not violate 18 U.S.C. § 875(c).  

Two of the three judges on the panel that heard the appeal agreed .  In their view, a violation of 875(c) requires a demonstration, first, that a reasonable person would interpret the communication in question as serious expression of an intention to inflict bodily harm and, second, that a reasonable person would perceive the communications as being conveyed "to effect some change or achieve some goal through intimidation."  Baker's speech failed, in their judgment, to rise to this level.

Judge Krupansky, the third member of the panel, dissented.  In a sharply worded opinion, he denounced the majority for compelling the prosecution to meet a standard higher than Congress intended or than the First Amendment required.  In his view, "the pertinent inquiry is whether a jury could find that a reasonable recipient of the communication would objectively tend to believe that the speaker was serious about his stated intention."  A reasonable jury, he argued, could conclude that Baker's speech met this standard -- especially in light of the fact that the woman named in the short story had, upon learning of it, experienced a "shattering traumatic reaction that resulted in recommended psychological counselling."

For additional information on the case, see Adam S. Miller, The Jake Baker Scandal: A Perversion of Logic .

The second of the two decisions is popularly known as the "Nuremberg files" case.  In 1995, the American Coalition of Life Activists (ACLA), an anti-abortion group that advocates the use of force in their efforts to curtail abortions, created a poster featuring what the ACLA described as the "Dirty Dozen," a group of doctors who performed abortions.  The posters offered "a $ 5,000 [r]eward for information leading to arrest, conviction and revocation of license to practice medicine" of the doctors in question, and listed their home addresses and, in some instances, their phone numbers.  Versions of the poster were distributed at anti-abortion rallies and later on television.  In 1996, an expanded list of abortion providers, now dubbed the "Nuremberg files," was posted on the Internet with the assistance of an anti-abortion activist named Neil Horsley.  The Internet version of the list designated doctors and clinic workers who had been attacked by anti-abortion terrorists in two ways:  the names of people who had been murdered were crossed out; the names of people who had been wounded were printed in grey.  (For a version of the Nuremberg Files web site, click here. WARNING: This material is very graphic in nature and may be disturbing to many readers.  It is presented in order to provide a complete view of the facts of the case).

The doctors named and described on the list feared for their lives.  In particular, some testified that they feared that, by publicizing their addresses and descriptions, the ACLA had increased the ease with which terrorists could locate and attack them -- and that, by publicizing the names of doctors who had already been killed, the ACLA was encouraging those attacks.

Some of the doctors sought recourse in the courts.  They sued the ACLA, twelve individual anti-abortion activists and an affiliated organization, contending that their actions violated the federal Freedom of Access to Clinic Entrances Act of 1994 (FACE), 18 U.S.C. §248, and the Racketeer Influenced and Corrupt Organizations Act (RICO), 18 U.S.C. §1962.  In an effort to avoid a First-Amendment challenge to the suit, the trial judge instructed the jury that defendants could be liable only if their statements were "true threats."  The jury, concluding that the ACLA had indeed made such true threats, awarded the plaintiffs $107 million in actual and punitive damages.  The trial court then enjoined the defendants from making or distributing the posters, the webpage or anything similar.

This past March, a panel of the Court of Appeals for the Ninth Circuit overturned the verdict , ruling that it violated the First Amendment.  Judge Kozinski began his opinion by likening the anti-abortion movement to other "political movements in American history," such as the Patriots in the American Revolution, abolitionism, the labor movement, the anti-war movement in the 1960s, the animal-rights movement, and the environmental movement.  All, he argued, have had their "violent fringes," which have lent to the language of their non-violent members "a tinge of menace."  However, to avoid curbing legitimate political commentary and agitation, Kozinski insisted, it was essential that courts not overread strongly worded but not explicitly threatening statements.  Specifically, he held that:  

Defendants can only be held liable if they "authorized, ratified, or directly threatened" violence. If defendants threatened to commit violent acts, by working alone or with others, then their statements could properly support the verdict. But if their statements merely encouraged unrelated terrorists, then their words are protected by the First Amendment.  

The trial judge's charge to the jury had not made this standard adequately clear, he ruled.  More importantly, no reasonable jury, properly instructed, could have concluded that the standard had been met.  Accordingly, the trial judge was instructed to dissolve the injunction and enter judgment for the defendants on all counts.

In the course of his opinion, Kozinski offered the following reflections on the fact that the defendants' speech had occurred in public discourse -- including the Internet:  

In considering whether context could import a violent meaning to ACLA's non-violent statements, we deem it highly significant that all the statements were made in the context of public discourse, not in direct personal communications. Although the First Amendment does not protect all forms of public speech, such as statements inciting violence or an imminent panic, the public nature of the speech bears heavily upon whether it could be interpreted as a threat.  As we held in McCalden v. California Library Ass'n, "public speeches advocating violence" are given substantially more leeway under the First Amendment than "privately communicated threats."  There are two reasons for this distinction: First, what may be hyperbole in a public speech may be understood (and intended) as a threat if communicated directly to the person threatened, whether face-to-face, by telephone or by letter. In targeting the recipient personally, the speaker leaves no doubt that he is sending the recipient a message of some sort. In contrast, typical political statements at rallies or through the media are far more diffuse in their focus because they are generally intended, at least in part, to shore up political support for the speaker's position.  Second, and more importantly, speech made through the normal channels of group communication, and concerning matters of public policy, is given the maximum level of protection by the Free Speech Clause because it lies at the core of the First Amendment.

3.  Intellectual Property

The First Amendment forbids Congress to make any law “abridging the freedom of speech.”  The copyright statute plainly interferes with certain kinds of speech: it prevents people from “publicly performing” or “reproducing” copyrighted material without permission.  In other words, several ways in which people might be inclined to “speak” have been declared by Congress illegal .  Does this imply that the copyright statute as a whole – or, less radically, some specific applications of it – should be deemed unconstitutional?

Courts confronted with this question have almost invariably answered:  no.  Two justifications are commonly offered in support of the compatibility of copyright and “freedom of speech.”  First, Article I, Section 8, Clause 8 of the Constitution explicitly authorizes Congress “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries,” and there is no indication that the drafters or ratifiers of the First Amendment intended to nullify this express grant of lawmaking power.  Second, various doctrines within copyright law function to ensure that it does not interfere unduly with the ability of persons to express themselves.  Specifically, the principle that only the particular way in which an idea is “expressed” is copyrightable, not the idea itself, ensures that the citizenry will be able to discuss concepts, arguments, facts, etc. without restraint.  Even more importantly, the fair use doctrine (discussed in the first module) provides a generous safe harbor to people making reasonable uses of copyrighted material for educational, critical, or scientific purposes.  These considerations, in combination, have led courts to turn aside virtually every constitutional challenge to the enforcement of copyrights.

Very recently, some of the ways in which copyright law has been modified and then applied to activity on the Internet have prompted a growing number of scholars and litigants to suggest that the conventional methods for reconciling copyright law and the First Amendment need to be reexamined.  Two developments present the issue especially sharply:

(1) For reasons we explored in the second module, last summer a federal court in New York ruled that posting on a website a link to another website from which a web surfer can download a software program designed to break an encryption system constitutes “trafficking” in anti-circumvention technology in violation of the Digital Millennium Copyright Act.  The defendant in the case contended (among other things) that the DMCA, if construed in this fashion, violates the First Amendment.  Judge Kaplan rejected this contention, reasoning that a combination of the Copyright Clause and a generous understanding of the "Necessary and Proper" clause of the Constitution provided constitutional support for the DMCA:

In enacting the DMCA, Congress found that the restriction of technologies for the circumvention of technological means of protecting copyrighted works "facilitate[s] the robust development and world-wide expansion of electronic commerce, communications, research, development, and education" by "mak[ing] digital networks safe places to disseminate and exploit copyrighted materials." That view can not be dismissed as unreasonable. Section 1201(a)(2) of the DMCA therefore is a proper exercise of Congress' power under the Necessary and Proper Clause. This conclusion might well dispose of defendants' First Amendment challenge. Given Congress' justifiable view that the DMCA is instrumental in carrying out the objective of the Copyright Clause, there arguably is no First Amendment objection to prohibiting the dissemination of means for circumventing technological methods for controlling access to copyrighted works. But the Court need not rest on this alone. In determining the constitutionality of governmental restriction on speech, courts traditionally have balanced the public interest in the restriction against the public interest in the kind of speech at issue.  This approach seeks to determine, in light of the goals of the First Amendment, how much protection the speech at issue merits. It then examines the underlying rationale for the challenged regulation and assesses how best to accommodate the relative weights of the interests in free speech interest and the regulation. As Justice Brandeis wrote, freedom of speech is important both as a means to achieve a democratic society and as an end in itself.  Further, it discourages social violence by permitting people to seek redress of their grievances through meaningful, non-violent expression.  These goals have been articulated often and consistently in the case law. The computer code at issue in this case does little to serve these goals. Although this Court has assumed that DeCSS has at least some expressive content, the expressive aspect appears to be minimal when compared to its functional component.  Computer code primarily is a set of instructions which, when read by the computer, cause it to function in a particular way, in this case, to render intelligible a data file on a DVD. It arguably "is best treated as a virtual machine . . . ." On the other side of this balance lie the interests served by the DMCA. Copyright protection exists to "encourage individual effort by personal gain" and thereby "advance public welfare" through the "promot[ion of] the Progress of Science and useful Arts."  The DMCA plainly was designed with these goals in mind. It is a tool to protect copyright in the digital age. It responds to the risks of technological circumvention of access controlling mechanisms designed to protect copyrighted works distributed in digital form. It is designed to further precisely the goals articulated above, goals of unquestionably high social value. This is quite clear in the specific context of this case. Plaintiffs are eight major motion picture studios which together are largely responsible for the development of the American film industry. Their products reach hundreds of millions of viewers internationally and doubtless are responsible for a substantial portion of the revenue in the international film industry each year. To doubt the contribution of plaintiffs to the progress of the arts would be absurd. DVDs are the newest way to distribute motion pictures to the home market, and their popularity is growing rapidly. 
The security of DVD technology is central to the continued distribution of motion pictures in this format. The dissemination and use of circumvention technologies such as DeCSS would permit anyone to make flawless copies of DVDs at little expense.  Without effective limits on these technologies, copyright protection in the contents of DVDs would become meaningless and the continued marketing of DVDs impractical. This obviously would discourage artistic progress and undermine the goals of copyright. The balance between these two interests is clear. Executable computer code of the type at issue in this case does little to further traditional First Amendment interests. The DMCA, in contrast, fits squarely within the goals of copyright, both generally and as applied to DeCSS. In consequence, the balance of interests in this case falls decidedly on the side of plaintiffs and the DMCA.  

One of the axes of debate in the ongoing appeal of the lower-court ruling concerns this issue.  For a challenge to Judge Kaplan's discussion of the First Amendment, see the amicus brief submitted to the Second Circuit by a group of law professors.

(2) Some scholars believe that the ambit of the fair use doctrine should and will shrink on the Internet.  Why?  Because, in their view, the principal purpose of the doctrine is to enable people to use copyrighted materials in ways that are socially valuable but that are likely, in the absence of a special legal privilege, to be blocked by transaction costs.  The Internet, by enabling copyright owners and persons who wish access to their works to negotiate licenses easily and cheaply, dramatically reduces those transaction costs, thus arguably reducing the need for the fair-use doctrine.  Recall that one of the justifications conventionally offered to explain the compatibility of copyright law and the First Amendment is the safety valve afforded critical commentary and educational activity by the fair use doctrine.  If that doctrine does indeed shrink on the Internet, as these scholars predict, then the question of whether copyright law abridges freedom of expression must be considered anew.

   

Discussion Topics

1.  Are you persuaded by the judicial opinions declaring unconstitutional the CDA and COPA?  Should CHIPA suffer the same fate?  Are there any ways in which government might regulate the Internet so as to shield children from pornography?

2.  Some authors have suggested that the best way to respond to pornography on the Internet is through "zoning."  For example, Christopher Furlow suggests the use of “restricted top-level domains,” or “rTLDs,” which would function similarly to area codes to identify particular areas of the Internet and make it easier for parents to control what type of material their children are exposed to online.  See Erogenous Zoning on the Cyber-Frontier, 5 Va. J.L. & Tech. 7, 4 (Spring 2000).  Do you find this proposal attractive?  Practicable?  Effective?

3.  Elizabeth Marsh raises the following question:  Suppose that the Ku Klux Klan sent unsolicited email messages to large numbers of African-Americans and Jews.  Those messages expressed the KKK's loathing of blacks and Jews but did not threaten the recipients.  Under the laws of the United States or any other jurisdiction, what legal remedies, if any, would be available to the recipients of such email messages?  Should the First Amendment be construed to shield "hate spam" of this sort?  More broadly, should "hate spam" be tolerated or suppressed?  For Marsh's views on the matter, see "Purveyors of Hate on the Internet: Are We Ready for Hate Spam?", 17 Ga. St. U. L. Rev. 379 (Winter 2000).

4.  Were the Jake Baker and Nuremberg Files cases decided correctly?  How would you draw the line between "threats" subject to criminal punishment and "speech" protected by the First Amendment?

5.  Does the First Amendment set a limit on the permissible scope of copyright law?  If so, how would you define that limit?

6.  Lyrissa Lidsky points out that the ways in which the Supreme Court has deployed the First Amendment to limit the application of the tort of defamation are founded on the assumption that most defamation suits will be brought against relatively powerful institutions (e.g., newspapers, television stations).  The Internet, by enabling relatively poor and powerless persons to broadcast to the world their opinions of powerful institutions (e.g., their employers, companies by which they feel wronged), increases the likelihood that, in the future, defamation suits will be brought most often by formidable plaintiffs against weak individual defendants.  If we believe that "[t]he Internet is . . . a powerful tool for equalizing imbalances of power by giving voice to the disenfranchised and by allowing more democratic participation in public discourse," we should be worried by this development.  Lidsky suggests that it may be necessary, in this altered climate, to reconsider the shape of the constitutional limitations on defamation.  Do you agree?  If so, how would you reformulate the relevant limitations?

7.  Like Lessig, Paul Berman suggests that the Internet should prompt us to reconsider the traditional "state action" doctrine that limits the kinds of interference with speech to which the First Amendment applies.  Berman supports this suggestion with the following example:  “[When] an online service provider recently attempted to take action against an entity that had sent junk e-mail on its service, a district court rejected the e-mailer's argument that such censorship of e-mail violated the First Amendment.  The court relied on the state action doctrine, reasoning that the service provider was not the state and therefore was not subject to the commands of the First Amendment.”  Such an outcome, he suggests, is unfortunate.  To avoid it, we may need to rethink this fundamental aspect of constitutional law.  Do you agree?  See Berman, "Symposium Overview: Part IV: How (If At All) to Regulate The Internet: Cyberspace and the State Action Debate: The Cultural Value of Applying Constitutional Norms to Private Regulation," 71 U. Colo. L. Rev. 1263 (Fall 2000).

Additional Resources

Memorandum Opinion, Mainstream Loudoun v. Loudoun County Library , U.S. District Court, Eastern District of Virginia, Case No. 97-2049-A. (November 23, 1998)

Mainstream Loudoun v. Loudoun County Library , (Tech Law Journal Summary)

Lawrence Lessig, Tyranny of the Infrastructure , Wired 5.07 (July 1997)

Board of Education v. Pico

ACLU Report, "Fahrenheit 451.2: Is Cyberspace Burning?"

Reno v. ACLU

ACLU offers various materials relating to the Reno v. ACLU case.

Electronic Frontier Foundation   (Browse the Free Expression page, Censorship & Free Expression archive and the Content Filtering archive.)

The Electronic Privacy Information Center (EPIC) offers links to various aspects of CDA litigation and discussion.

Platform for Internet Content Selection (PICS)  (Skim the "PICS and Intellectual Freedom FAQ".  Browse "What Governments, Media and Individuals are Saying about PICS (pro and con)".)

Jason Schlosberg, Judgment on "Nuremberg": An Analysis of Free Speech and Anti-Abortion Threats Made on the Internet , 7 B.U. J. SCI. & TECH. L. (Winter 2001)

CyberAngels.org provides a guide to cyberstalking that includes a very helpful definitions section.

Cyberstalking: A New Challenge for Law Enforcement and Industry – A Report from the Attorney General to the Vice President (August 1999) provides very helpful definitions and explanations related to cyberstalking, including First Amendment implications; also provides links to additional resources.

National Center for Victims of Crime

The Anti-Defamation League web site offers a wealth of resources for dealing with hate online , including guides for parents and filtering software.  The filtering software, called Hate Filter, is designed to give parents the ability to make decisions regarding what their children are exposed to online.  The ADL believes that “Censorship is not the answer to hate on the Internet. ADL supports the free speech guarantees embodied in the First Amendment of the United States Constitution, believing that the best way to combat hateful speech is with more speech.”

Laura Lorek, "Sue the bastards!," ZDNet, March 12, 2001.

"At Risk Online: Your Good Name."   ZDNet April 2001.  

Jennifer K. Swartz, "Beyond the Schoolhouse Gates: Do Students Shed Their Constitutional Rights When Communicating to a Cyber-Audience," 48 Drake L. Rev. 587 (2000).


How free speech is under attack in the U.S.

February 20, 2022 / 9:12 AM EST / CBS News

When someone says something we disagree with, should we shut them up? In 1927, Supreme Court Justice Louis Brandeis had an answer: "The remedy to be applied is more speech, not enforced silence."

Well, in that case, the internet should have solved everything, notes correspondent David Pogue – it's nothing but more speech. And yet lately, the news is full of stories about people trying to limit other people's expression:

  • Florida lawmakers are advancing a pair of bills that would bar school districts from encouraging classroom discussions about sexual orientation or gender identity – what critics are calling "Don't Say Gay" bills.
  • Nearly a dozen states have introduced bills that would direct what students can and cannot be taught about the role of slavery in American history and the ongoing effects of racism in the U.S. today .
  • A Tennessee school board removed "Maus," a Pulitzer Prize-winning graphic novel about the Holocaust, from its curriculum.
  • Spotify faced growing controversy over episodes of Joe Rogan's podcast containing racial slurs and COVID-19 misinformation .
  • An incoming Georgetown Law administrator was assailed by a student group for posting a "racist, sexist, and misogynistic" tweet that criticized President Joe Biden's announcement that he would nominate a Black woman for the Supreme Court.

"I would argue that the culture of free speech is under attack in the U.S.," said Jacob Mchangama, the author of "Free Speech," a new book that documents the history of free expression. "And without a robust culture of free speech based on tolerance, the laws and constitutional protection will ultimately erode.


"People both on the left and the right are sort of coming at free speech from different angles with different grievances, that point to a general loss of faith in the First Amendment."

The free-speech erosion is even happening in schools. Since January last year, according to PEN America, Republican lawmakers have introduced more than 150 state laws that would restrict how teachers can discuss race, sexual orientation, and gender identity in the classroom.

Jennifer Given, who teaches high-school history in Hollis, New Hampshire, said of the laws, "It's about making up false narratives to further a political goal of your own.

"It's a really scary time to be a teacher," she told Pogue. "We're self-censoring, We are absolutely avoiding certain things and ideas in an effort to stay within the lines as best we understand them."

In New Hampshire, a new law limits what teachers can say about racism and sexism – and a conservative group is offering a $500 bounty to anyone who turns in a teacher who violates it.

Given said, "The ghost of Senator McCarthy is alive and well in some of our state house hallways."

Pogue asked, "What would happen to you if you did step afoul of this law?"

"That can result in the loss of your license," she replied. "And so, I would not only be unemployable at my school, but I would be unemployable anywhere."

"But what I don't understand is, this is New Hampshire, whose motto is, 'Live Free or Die'!"

"Yeah, yeah," Given laughed. "There's a lot of emphasis on the 'or die' part of late!"


UC Berkeley professor John Powell, an expert on civil liberties and democracy, said of the classroom prohibitions, "That's a very serious freedom of speech issue. To me, that is so far off the rail."

He's especially alarmed at the record number of books that are being banned in schools all over the country. Conservatives object to books about sex, gender issues, and racial injustice (such as Toni Morrison's "Beloved," Alex Gino's "George," and "The 1619 Project"), and liberals object to books containing outdated racial depictions (including John Steinbeck's "Of Mice and Men," Mark Twain's "The Adventures of Huckleberry Finn," and Harper Lee's "To Kill a Mockingbird").

  • 10 Most Challenged Books Lists (American Library Association)
  • Virginia school board officials suggest burning books banned from schools ("Red & Blue")

"You can't make the Holocaust a nice thing – it wasn't a nice thing!" Powell laughed. "You can't make slavery a nice thing. 'That makes people uncomfortable.' It should make people uncomfortable! The goal of education is not comfort. So, if someone really wants to challenge the Holocaust, let 'em challenge it. But don't ban a discussion on it."

In the mid-1800s, English philosopher John Stuart Mill proposed that governments limit free speech only when it would cause harm to others.

Powell said, "He wrote a book called 'On Liberty,' [about] freedom. And he was very concerned about the government silencing people, that citizens had to have the right to express themselves."

Our laws have generally followed that guideline. In the U.S., public speech can't include obscenity, defamation, death threats, incitement to violence – harms.

But Powell said that the recent restrictions have more to do with culture wars than with preventing harm: " I want to regulate that 'cause I don't like it. To me, that's wrong. That's problematic."

"So, there's a difference between saying something that makes you uncomfortable, and saying something that damages society or incites to riot?" asked Pogue.

"Right, and discomfort is not the same as an injury."

But these days, there are entire new categories of speech that can lead to harm. "Now, there's a concept of disinformation, where you deliberately engage in lies, in fact to cause harm, to cause injury, to exclude some people," said Powell. "But what it really means is our understanding of the First Amendment and our understanding of free speech is evolving. It has to evolve."

  • A dozen anti-vaccine accounts are responsible for 65% of disinformation shared online, new report finds

It's probably no coincidence that the new censorship culture arose simultaneously with social networks like Facebook and Twitter.

"The First Amendment was conceived as a protection of citizens from restriction of expression by the government, and not by private companies or other entities," said Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, and author of "Silicon Values."


Pogue asked, "So for example, Donald Trump getting kicked off Twitter and Facebook ? Is that censorship? Is that bad censorship? Is that good censorship?"

"I think Trump getting kicked off of Facebook and Twitter is kind of complicated," York said. "But the thing that really concerns me the most is that someone like Mark Zuckerberg, whom none of us elected, has the power to remove an elected official. I think that should really worry us, even if we do feel that Trump should be silenced."

York said that the big tech companies censor our speech every day, sometimes by mistake, but always without supervision or transparency. "We saw protest content around Black Lives Matter removed on Facebook's platform, wrongfully," she said. "LGBTQ content has been removed, as well as things like art and satire."

  • Faceoff against Facebook: Stopping the flow of misinformation ("Sunday Morning")
  • Texas governor signs law prohibiting social media platforms from banning users
  • Lawmakers vow stricter regulations on social media platforms to combat misinformation

According to Jacob Mchangama, social networks censor us in another way, too, by making us afraid to speak at all: "There was actually this survey from 2020 by the Cato Institute which showed that 62% of Americans self-censor, who are afraid to sort of express their political views on specific topics.

"It shows this paradox: Americans enjoy the strongest legal constitutional protection of free speech probably in world history. But they still fear the consequences of being fired for speaking out on certain political views. And that's not a healthy sign."

But it's not just America. Since 2019, at least 37 countries have passed laws that increase censorship (of individuals or the media), including in Europe, where Jillian York lives. "There's a lot of debate right now in Germany, for example, over a fairly recent law that restricts hate speech online," York said, "but also creates penalties for things like the country's insult law. So, you know, insulting someone online could be penalized financially."

Overall, it would be easy to get depressed by these attacks on free speech. Especially if you're a teacher, like Jennifer Given.

Pogue asked, "What's the end point for you, if this keeps going this way in New Hampshire?"

"I don't know," she laughed. "There is a point where you start going, 'Maybe I've had it.'"

But if it cheers you up any, Jacob Mchangama points out that we still enjoy more freedom of speech than most countries: "If we were having this discussion in Russia or Turkey, you know, someone would pick me up when I go down on the street, and you might not hear from me for a long time."

He said we should fight to maintain our freedom of civil discussion – and never take it for granted.

"I'm not saying that free speech is just great, and doesn't entail any consequences; it does," he said. "You know, we should think about, how do we mitigate misinformation? How can we ensure that we counter hate speech without compromising free speech?

"And, you know, it's an experiment. But I would argue that it's been a very beneficial experiment. And one which is very much worth continuing."

       For more info:

  • "Free Speech: A History from Socrates to Social Media"  by Jacob Mchangama (Basic Books), in Hardcover, eBook and Audio formats, available via  Amazon  and  Indiebound
  • Follow  Jacob Mchangama on Twitter
  • John A. Powell, professor, University of California, Berkeley School of Law
  • "Silicon Values: The Future of Free Speech Under Surveillance Capitalism"  by Jillian C. York (Verso), in Hardcover, eBook and Audio formats, available via  Amazon  and  Indiebound
  • jilliancyork.com
  • Electronic Frontier Foundation

       Story produced by Mark Hudspeth. Editor: Mike Levine.



Democracy, Social Media, and Freedom of Expression: Hate, Lies, and the Search for the Possible Truth


This Essay is a critical reflection on the impact of the digital revolution and the internet on three topics that shape the contemporary world: democracy, social media, and freedom of expression. Part I establishes historical and conceptual assumptions about constitutional democracy and discusses the role of digital platforms in the current moment of democratic recession. Part II discusses how, while social media platforms have revolutionized interpersonal and social communication and democratized access to knowledge and information, they also have led to an exponential spread of mis- and disinformation, hate speech, and conspiracy theories. Part III proposes a framework that balances regulation of digital platforms with the countervailing fundamental right to freedom of expression, a right that is essential for human dignity, the search for the possible truth, and democracy. Part IV highlights the role of society and the importance of media education in the creation of a free, but positive and constructive, environment on the internet.

I. Introduction

Before the internet, few actors could afford to participate in public debate due to the barriers that limited access to its enabling infrastructure, such as television channels and radio frequencies. 1 Digital platforms tore down this gate by creating open online communities for user-generated content, published without editorial control and at no cost. This exponentially increased participation in public discourse and the amount of information available. 2 At the same time, it led to an increase in disinformation campaigns, hate speech, slander, lies, and conspiracy theories used to advance antidemocratic goals. Platforms’ attempts to moderate speech at scale while maximizing engagement and profits have led to an increasingly prominent role for content moderation algorithms that shape who can participate and be heard in online public discourse. These systems play an essential role in the exercise of freedom of expression and in democratic competence and participation in the 21st century.

In this context, this Essay is a critical reflection on the impacts of the digital revolution and of the internet on democracy and freedom of expression. Part I establishes historical and conceptual assumptions about constitutional democracy; it also discusses the role of digital platforms in the current moment of democratic recession. Part II discusses how social media platforms are revolutionizing interpersonal and social communication and democratizing access to knowledge and information, but also leading to an exponential spread of mis- and disinformation, hate speech, and conspiracy theories. Part III proposes a framework for the regulation of digital platforms that seeks to find the right balance with the countervailing fundamental right to freedom of expression. Part IV highlights the role of society and the importance of media education in the creation of a free, but positive and constructive, environment on the internet.

II. Democracy and Authoritarian Populism

Constitutional democracy emerged as the predominant ideology of the 20th century, rising above the alternative projects of communism, fascism, Nazism, military regimes, and religious fundamentalism . 3 Democratic constitutionalism centers around two major ideas that merged at the end of the 20th century: constitutionalism , heir of the liberal revolutions in England, America, and France, expressing the ideas of limited power, rule of law, and respect for fundamental rights; 4 and democracy , a regime of popular sovereignty, free and fair elections, and majority rule. 5 In most countries, democracy only truly consolidated throughout the 20th century through universal suffrage guaranteed with the end of restrictions on political participation based on wealth, education, sex, or race. 6

Contemporary democracies are made up of votes, rights, and reasons. They are not limited to fair procedural rules in the electoral process, but demand respect for substantive fundamental rights of all citizens and a permanent public debate that informs and legitimizes political decisions. 7 To ensure protection of these three aspects, most democratic regimes include in their constitutional framework a supreme court or constitutional court with jurisdiction to arbitrate the inevitable tensions that arise between democracy’s popular sovereignty and constitutionalism’s fundamental rights. 8 These courts are, ultimately, the institutions responsible for protecting fundamental rights and the rules of the democratic game against any abuse of power attempted by the majority. Recent experiences in Hungary, Poland, Turkey, Venezuela, and Nicaragua show that when courts fail to fulfill this role, democracy collapses or suffers major setbacks. 9

In recent years, several events have challenged the prevalence of democratic constitutionalism in many parts of the world, in a phenomenon characterized by many as democratic recession. 10 Even consolidated democracies have endured moments of turmoil and institutional discredit, 11 as the world witnessed the rise of an authoritarian, anti-pluralist, and anti-institutional populist wave posing serious threats to democracy.

Populism can be right-wing or left-wing, 12 but the recent wave has been characterized by the prevalence of right-wing extremism, often racist, xenophobic, misogynistic, and homophobic. 13 While in the past the far left was united through Communist International, today it is the far right that has a major global network. 14 The hallmark of right-wing populism is the division of society into “us” (the pure, decent, conservatives) and “them” (the corrupt, liberal, cosmopolitan elites). 15 Authoritarian populism flows from the unfulfilled promises of democracy for opportunities and prosperity for all. 16 Three aspects undergird this democratic frustration: political (people do not feel represented by the existing electoral systems, political leaders, and democratic institutions); social (stagnation, unemployment, and the rise of inequality); and cultural identity (a conservative reaction to the progressive identity agenda of human rights that prevailed in recent decades with the protection of the fundamental rights of women, African descendants, religious minorities, LGBTQ+ communities, indigenous populations, and the environment). 17

Extremist authoritarian populist regimes often adopt similar strategies to capitalize on the political, social, and cultural identity-based frustrations fueling democratic recessions. These tactics include by-pass or co-optation of the intermediary institutions that mediate the interface between the people and the government, such as the legislature, the press, and civil society. They also involve attacks on supreme courts and constitutional courts and attempts to capture them by appointing submissive judges. 18 The rise of social media potentializes these strategies by creating a free and instantaneous channel of direct communication between populists and their supporters. 19 This unmediated interaction facilitates the use of disinformation campaigns, hate speech, slander, lies, and conspiracy theories as political tools to advance antidemocratic goals. The instantaneous nature of these channels is ripe for impulsive reactions, which facilitate verbal attacks by supporters and polarization, feeding back into the populist discourse. These tactics threaten democracy and free and fair elections because they deceive voters and silence the opposition, distorting public debate. Ultimately, this form of communication undermines the values that justify the special protection of freedom of expression to begin with. The “truth decay” and “fact polarization” that result from these efforts discredit institutions and consequently foster distrust in democracy. 20

III. Internet, Social Media, and Freedom of Expression 21

The third industrial revolution, also known as the technological or digital revolution, has shaped our world today. 22 Some of its main features are the massification of personal computers, the universalization of smartphones and, most importantly, the internet. One of the main byproducts of the digital revolution and the internet was the emergence of social media platforms such as Facebook, Instagram, YouTube, TikTok and messaging applications like WhatsApp and Telegram. We live in a world of apps, algorithms, artificial intelligence, and innovation occurring at breakneck speed where nothing seems truly new for very long. This is the background for the narrative that follows.

A. The Impact of the Internet

The internet revolutionized the world of interpersonal and social communication, exponentially expanded access to information and knowledge, and created a public sphere where anyone can express ideas, opinions, and disseminate facts. 23 Before the internet, one’s participation in public debate was dependent upon the professional press, 24 which investigated facts, abided by standards of journalistic ethics, 25 and was liable for damages if it knowingly or recklessly published untruthful information. 26 There was a baseline of editorial control and civil liability over the quality and veracity of what was published in this medium. This does not mean that it was a perfect world. The number of media outlets was, and continues to be, limited in quantity and perspectives; journalistic companies have their own interests, and not all of them distinguish fact from opinion with the necessary care. Still, there was some degree of control over what became public, and there were costs to the publication of overtly hateful or false speech.

The internet, with the emergence of websites, personal blogs, and social media, revolutionized this status quo. It created open, online communities for user-generated texts, images, videos, and links, published without editorial control and at no cost. This advanced participation in public discourse, diversified sources, and exponentially increased available information. 27 It gave a voice to minorities, civil society, politicians, public agents, and digital influencers, and it allowed demands for equality and democracy to acquire global dimensions. This represented a powerful contribution to political dynamism, resistance to authoritarianism, and stimulation of creativity, scientific knowledge, and commercial exchanges. 28 Increasingly, the most relevant political, social, and cultural communications take place on the internet’s unofficial channels.

However, the rise of social media also led to an increase in the dissemination of abusive and criminal speech. 29 While these platforms did not create mis- or disinformation, hate speech, or speech that attacks democracy, the ability to publish freely, with no editorial control and little to no accountability, increased the prevalence of these types of speech and facilitated its use as a political tool by populist leaders. 30 Additionally, and more fundamentally, platform business models compounded the problem through algorithms that moderate and distribute online content. 31

B. The Role of Algorithms

The ability to participate and be heard in online public discourse is currently defined by the content moderation algorithms of a couple of major technology companies. Although digital platforms initially presented themselves as neutral media where users could publish freely, they in fact exercise legislative, executive, and judicial functions because they unilaterally define speech rules in their terms and conditions and their algorithms decide how content is distributed and how these rules are applied. 32

Specifically, digital platforms rely on algorithms for two different functions: recommending content and moderating content. 33 First, a fundamental aspect of the service they offer involves curating the content available to provide each user with a personalized experience and increase time spent online. They resort to deep learning algorithms that monitor every action on the platform, draw from user data, and predict what content will keep a specific user engaged and active based on their prior activity or that of similar users. 34 The transition from a world of information scarcity to a world of information abundance generated fierce competition for user attention—the most valuable resource in the Digital Age. 35 The power to modify a person’s information environment has a direct impact on their behavior and beliefs. Because AI systems can track an individual’s online history, they can tailor specific messages to maximize impact. More importantly, they monitor whether and how the user interacts with the tailored message, using this feedback to influence future content targeting and progressively becoming more effective in shaping behavior. 36 Given that humans engage more with content that is polarizing and provocative, these algorithms elicit powerful emotions, including anger. 37 The power to organize online content therefore directly impacts freedom of expression, pluralism, and democracy. 38
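To make this feedback loop concrete, the sketch below is a deliberately simplified, hypothetical example in Python; it is not any platform's actual recommendation system, and every name, topic, and weight in it is invented. It ranks candidate posts for a user by combining affinity with the user's past engagement and a crude "provocativeness" signal, illustrating how optimizing purely for predicted engagement can push polarizing content upward.

```python
# Illustrative sketch only: a toy engagement-based ranker, not any platform's
# actual recommendation system. All names, topics, and weights are hypothetical.
from collections import Counter

# Hypothetical interaction log: user -> topics of posts they engaged with.
history = {
    "user_a": ["politics", "politics", "sports", "conspiracy"],
    "user_b": ["cooking", "travel", "cooking"],
}

# Candidate posts awaiting ranking, each tagged with a topic and a crude
# "provocativeness" signal (engagement bait tends to score high on this).
candidates = [
    {"id": 1, "topic": "politics", "provocative": 0.9},
    {"id": 2, "topic": "cooking", "provocative": 0.1},
    {"id": 3, "topic": "conspiracy", "provocative": 0.8},
    {"id": 4, "topic": "sports", "provocative": 0.3},
]

def rank_feed(user, posts):
    """Order posts by predicted engagement: affinity with the user's past
    behavior plus a bonus for provocative content, mirroring the feedback
    loop described in the text (engagement in, more of the same out)."""
    topic_counts = Counter(history.get(user, []))
    total = sum(topic_counts.values()) or 1
    def score(post):
        affinity = topic_counts[post["topic"]] / total
        return 0.7 * affinity + 0.3 * post["provocative"]
    return sorted(posts, key=score, reverse=True)

print([p["id"] for p in rank_feed("user_a", candidates)])  # [1, 3, 4, 2]
```

Even in this toy version, the conspiracy-themed post outranks the neutral ones for "user_a" because past engagement and provocativeness reinforce each other; a real system uses far richer signals, but the incentive structure is the same.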

In addition to recommendation systems, platforms rely on algorithms for content moderation, the process of classifying content to determine whether it violates community standards. 39 As mentioned, the growth of social media and its use by people around the world allowed for the spread of lies and criminal acts with little cost and almost no accountability, threatening the stability of even long-standing democracies. Inevitably, digital platforms had to enforce terms and conditions defining the norms of their digital community and moderate speech accordingly. 40 But the potentially infinite amount of content published online means that this control cannot be exercised exclusively by humans.

Content moderation algorithms optimize the scanning of published content to identify violations of community standards or terms of service at scale and apply measures ranging from removal to reducing reach or including clarifications or references to alternative information. Platforms often rely on two algorithmic models for content moderation. The first is the reproduction detection model , which uses unique identifiers to catch reproductions of content previously labeled as undesired. 41 The second system, the predictive model , uses machine learning techniques to identify potential illegalities in new and unclassified content. 42 Machine learning is a subtype of artificial intelligence that extracts patterns in training datasets, capable of learning from data without explicit programming to do so. 43 Although helpful, both models have shortcomings.
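For concreteness, here is a minimal, purely hypothetical sketch of the two models just described: exact-match reproduction detection via content fingerprints, and a stand-in "predictive" scorer for new, unclassified posts. It is not any platform's implementation; the hashes, word weights, and threshold are invented, and production systems rely on perceptual hashing and trained machine learning classifiers rather than word lists.

```python
# Illustrative sketch only: toy versions of the two moderation models named
# above. Real systems use perceptual hashes and large ML classifiers; the
# logic here is deliberately minimal and all data is hypothetical.
import hashlib

# --- Reproduction detection: match exact re-uploads of known bad content ---
known_bad_hashes = {
    hashlib.sha256(b"previously removed propaganda clip").hexdigest(),
}

def is_known_reproduction(content: bytes) -> bool:
    """Flag content whose fingerprint matches material already labeled as
    violating; trivially evaded by small edits, as noted in the text."""
    return hashlib.sha256(content).hexdigest() in known_bad_hashes

# --- Predictive model: score new, unseen content ---
# Stand-in for a trained classifier: weights "learned" from labeled examples.
toxic_term_weights = {"idiots": 0.4, "destroy": 0.3, "traitors": 0.5}

def violation_score(text: str) -> float:
    """Return a rough probability-like score that the post violates policy.
    A real predictive model would use learned features, not a word list."""
    words = text.lower().split()
    return min(1.0, sum(toxic_term_weights.get(w, 0.0) for w in words))

post = "these traitors will destroy everything"
if is_known_reproduction(post.encode()):
    action = "remove (known reproduction)"
elif violation_score(post) > 0.6:
    action = "send to human review"
else:
    action = "keep"
print(action)  # "send to human review" under these made-up weights
```

The split mirrors the shortcomings discussed next: the fingerprint check misses anything that is not an exact reproduction, while the predictive scorer has no sense of context and inherits whatever biases are embedded in its training data.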

The reproduction detection model is inefficient for content such as hate speech and disinformation, where the potential for new and different publications is virtually unlimited and users can deliberately make changes to avoid detection. 44 The predictive model is still limited in its ability to address situations to which it has not been exposed in training, primarily because it lacks the human ability to understand nuance and to factor in contextual considerations that influence the meaning of speech. 45 Additionally, machine learning algorithms rely on data collected from the real world and may embed prejudices or preconceptions, leading to asymmetrical applications of the filter. 46 And because the training data sets are so large, it can be hard to audit them for these biases. 47

Despite these limitations, algorithms will continue to be a crucial resource in content moderation given the scale of online activities. 48 In the last two months of 2020 alone, Facebook applied a content moderation measure to 105 million publications, and Instagram to 35 million. 49 YouTube has 500 hours of video uploaded per minute and removed more than 9.3 million videos. 50 In the first half of 2020, Twitter analyzed complaints related to 12.4 million accounts for potential violations of its rules and took action against 1.9 million. 51 This data supports the claim that human moderation is impossible, and that algorithms are a necessary tool to reduce the spread of illicit and harmful content. On the one hand, holding platforms accountable for occasional errors in these systems would create wrong incentives to abandon algorithms in content moderation with the negative consequence of significantly increasing the spread of undesired speech. 52 On the other hand, broad demands for platforms to implement algorithms to optimize content moderation, or laws that impose very short deadlines to respond to removal requests submitted by users, can create excessive pressure for the use of these imprecise systems on a larger scale. Acknowledging the limitations of this technology is fundamental for precise regulation.

C. Some Undesirable Consequences

One of the most striking impacts of this new informational environment is the exponential increase in the scale of social communications and the circulation of news. Around the world, few newspapers, print publications, and radio stations cross the threshold of having even one million subscribers and listeners. This suggests the majority of these publications have a much smaller audience, possibly in the thousands or tens of thousands of people. 53 Television reaches millions of viewers, although diluted among dozens or hundreds of channels. 54 Facebook, on the other hand, has about 3 billion active users. 55 YouTube has 2.5 billion accounts. 56 WhatsApp, more than 2 billion. 57 The numbers are bewildering. However, and as anticipated, just as the digital revolution democratized access to knowledge, information, and public space, it also introduced negative consequences for democracy that must be addressed. Three of them include:

a) the increased circulation of disinformation, deliberate lying, hate speech, conspiracy theories, attacks on democracy, and inauthentic behavior, made possible by recommendation algorithms that optimize for user engagement and content moderation algorithms that are still incapable of adequately identifying undesirable content;
b) the tribalization of life, with the formation of echo chambers where groups speak only to themselves, reinforcing confirmation bias, 58 making speech progressively more radical, and contributing to polarization and intolerance; and
c) a global crisis in the business model of the professional press. Although social media platforms have become one of the main sources of information, they do not produce their own content. They hire engineers, not reporters, and their interest is engagement, not news. 59 Because advertisers’ spending has migrated away from traditional news publications to technological platforms with broader reaches, the press has suffered from a lack of revenue which has forced hundreds of major publications, national and local, to close their doors or reduce their journalist workforce. 60 But a free and strong press is more than just a private business; it is a pillar for an open and free society. It serves a public interest in the dissemination of facts, news, opinions, and ideas, indispensable preconditions for the informed exercise of citizenship. Knowledge and truth—never absolute, but sincerely sought—are essential elements for the functioning of a constitutional democracy. Citizens need to share a minimum set of common objective facts from which to inform their own judgments. If they cannot accept the same facts, public debate becomes impossible. Intolerance and violence are byproducts of the inability to communicate—hence the importance of “knowledge institutions,” such as universities, research entities, and the institutional press. The value of free press for democracy is illustrated by the fact that in different parts of the world, the press is one of the only private businesses specifically referred to throughout constitutions. Despite its importance for society and democracy, surveys reveal a concerning decline in its prestige. 61

In the beginning of the digital revolution, there was a belief that the internet should be a free, open, and unregulated space in the interest of protecting access to the platform and promoting freedom of expression. Over time, concerns emerged, and a consensus gradually grew for the need for internet regulation. Multiple approaches for regulating the internet were proposed, including: (a) economic, through antitrust legislation, consumer protection, fair taxation, and copyright rules; (b) privacy, through laws restricting collection of user data without consent, especially for content targeting; and (c) targeting inauthentic behavior, content control, and platform liability rules. 62

Devising the proper balance between the indispensable preservation of freedom of expression on the one hand, and the repression of illegal content on social media on the other, is one of the most complex issues of our generation. Freedom of expression is a fundamental right incorporated into virtually all contemporary constitutions and, in many countries, is considered a preferential freedom. Several reasons have been advanced for granting freedom of expression special protection, including its roles: (a) in the search for the possible truth 63 in an open and plural society, 64 as explored above in discussing the importance of the institutional press; (b) as an essential element for democracy 65 because it allows the free circulation of ideas, information, and opinions that inform public opinion and voting; and (c) as an essential element of human dignity, 66 allowing the expression of an individual’s personality.

The regulation of digital platforms cannot undermine these values but must instead aim at their protection and strengthening. However, in the digital age, these same values that historically justified the reinforced protection of freedom of expression can now justify its regulation. As U.N. Secretary-General AntĂłnio Guterres thoughtfully stated, “the ability to cause large-scale disinformation and undermine scientifically established facts is an existential risk to humanity.” 67

Two aspects of the internet business model are particularly problematic for the protection of democracy and free expression. The first is that, although access to most technological platforms and applications is free, users pay for access with their privacy. 68 As Lawrence Lessig observed, we watch television, but the internet watches us. 69 Everything each individual does online is monitored and monetized. Data is the modern gold. 70 Thus, those who pay for the data can more efficiently disseminate their message through targeted ads. As previously mentioned, the power to modify a person’s information environment has a direct impact on behavior and beliefs, especially when messages are tailored to maximize impact on a specific individual. 71

The second aspect is that algorithms are programmed to maximize time spent online. This often leads to the amplification of provocative, radical, and aggressive content. This in turn compromises freedom of expression because, by targeting engagement, algorithms sacrifice the search for truth (with the wide circulation of fake news), democracy (with attacks on institutions and defense of coups and authoritarianism), and human dignity (with offenses, threats, racism, and others). The pursuit of attention and engagement for revenue is not always compatible with the values that underlie the protection of freedom of expression.

IV. A Framework for the Regulation of Social Media

Platform regulation models can be broadly classified into three categories: (a) state or government regulation, through legislation and rules drawing a compulsory, encompassing framework; (b) self-regulation, through rules drafted by platforms themselves and materialized in their terms of use; and (c) regulated self-regulation or coregulation, through standards fixed by the state but which grant platform flexibility in materializing and implementing them. This Essay argues for the third model, with a combination of governmental and private responsibilities. Compliance should be overseen by an independent committee, with the minority of its representatives coming from the government, and the majority coming from the business sector, academia, technology entities, users, and civil society.

The regulatory framework should aim to reduce the asymmetry of information between platforms and users, safeguard the fundamental right to freedom of expression from undue private or state interventions, and protect and strengthen democracy. The current technical limitations of content moderation algorithms explored above and normal substantive disagreement about what content should be considered illegal or harmful suggest that an ideal regulatory model should optimize the balance between the fundamental rights of users and platforms, recognizing that there will always be cases where consensus is unachievable. The focus of regulation should be the development of adequate procedures for content moderation, capable of minimizing errors and legitimizing decisions even when one disagrees with the substantive result. 72 With these premises as background, the proposal for regulation formulated here is divided into three levels: (a) the appropriate intermediary liability model for user-generated content; (b) procedural duties for content moderation; and (c) minimum duties to moderate content that represents concrete threats to democracy and/or freedom of expression itself.

A. Intermediary Liability for User-Generated Content

There are three main regimes for platform liability for third-party content. In strict liability models, platforms are held responsible for all user-generated posts. 73 Since platforms have limited editorial control over what is posted and limited human oversight over the millions of posts made daily, this would be a potentially destructive regime. In knowledge-based liability models, platform liability arises if they do not act to remove content after an extrajudicial request from users—this is also known as a “notice-and-takedown” system. 74 Finally, a third model would make platforms liable for user-generated content only in cases of noncompliance with a court order mandating content removal. This latter model was adopted in Brazil with the Civil Framework for the Internet (Marco Civil da Internet). 75 The only exception in Brazilian legislation to this general rule is revenge porn: if there is a violation of intimacy resulting from the nonconsensual disclosure of images, videos, or other materials containing private nudity or private sexual acts, extrajudicial notification is sufficient to create an obligation for content removal under penalty of liability. 76

In our view, the Brazilian model is the one that most adequately balances the fundamental rights involved. As mentioned, in the most complex cases concerning freedom of expression, people will disagree on the legality of speech. Rules holding platforms accountable for not removing content after mere user notification create incentives for over-removal of any potentially controversial content, excessively restricting users’ freedom of expression. If the state threatens to hold digital platforms accountable if it disagrees with their assessment, companies will have the incentive to remove all content that could potentially be considered illicit by courts to avoid liability. 77

Nonetheless, this liability regime should coexist with a broader regulatory structure imposing principles, limits, and duties on content moderation by digital platforms, both to increase the legitimacy of platforms’ application of their own terms and conditions and to minimize the potentially devastating impacts of illicit or harmful speech.

B. Standards for Proactive Content Moderation

Platforms have free enterprise and freedom of expression rights to set their own rules and decide the kind of environment they want to create, as well as to moderate harmful content that could drive users away. However, because these content moderation algorithms are the new governors of the public sphere, 78 and because they define the ability to participate and be heard in online public discourse, platforms should abide by minimum procedural duties of transparency and auditing, due process, and fairness.

1. Transparency and Auditing

Transparency and auditing measures serve mainly to ensure that platforms are accountable for content moderation decisions and for the impacts of their algorithms. They provide users with greater understanding and knowledge about the extent to which platforms regulate speech, and they provide oversight bodies and researchers with information to understand the threats of digital services and the role of platforms in amplifying or minimizing them.

Driven by demands from civil society, several digital platforms already publish transparency reports. 79 However, the lack of binding standards means that these reports have significant gaps, no independent verification of the information provided, 80 and no standardization across platforms, preventing comparative analysis. 81 In this context, regulatory initiatives that impose minimum requirements and standards are crucial to make oversight more effective. On the other hand, overly broad transparency mandates may force platforms to adopt simpler content moderation rules to reduce costs, which could negatively impact the accuracy of content moderation or the quality of the user experience. 82 A tiered approach to transparency, where certain information is public and certain information is limited to oversight bodies or previously qualified researchers, ensures adequate protection of countervailing interests, such as user privacy and business confidentiality. 83 The Digital Services Act, 84 recently passed in the European Union, contains robust transparency provisions that generally align with these considerations. 85

The information that should be publicly provided includes clear and unambiguous terms of use, the options available to address violations (such as removal, amplification reduction, clarifications, and account suspension) and the division of labor between algorithms and humans. More importantly, public transparency reports should include information on the accuracy of automated moderation measures and the number of content moderation actions broken down by type (such as removal, blocking, and account deletion). 86 There must also be transparency obligations to researchers, giving them access to crucial information and statistics, including to the content analyzed for the content moderation decisions. 87

Although valuable, transparency requirements are insufficient in promoting accountability because they rely on users and researchers to actively monitor platform conduct and presuppose that they have the power to draw attention to flaws and promote changes. 88 Legally mandated third-party algorithmic auditing is therefore an important complement to ensure that these models satisfy legal, ethical, and safety standards and to elucidate the embedded value tradeoffs, such as between user safety and freedom of expression. 89 As a starting point, algorithm audits should consider matters such as how accurately they perform, any potential bias or discrimination incorporated in the data, and to what extent the internal mechanics are explainable to humans. 90 The Digital Services Act contains a similar proposal. 91

The market for algorithmic auditing is still emergent and replete with uncertainty. In attempting to navigate this scenario, regulators should: (a) define how often the audits should happen; (b) develop standards and best practices for auditing procedures; (c) mandate specific disclosure obligations so auditors have access to the required data; and (d) define how identified harms should be addressed. 92

2. Due Process and Fairness

To ensure due process, platforms must inform users affected by content moderation decisions of the allegedly violated provision of the terms of use, as well as offer an internal system of appeals against these decisions. Platforms must also create systems that allow for the substantiated denunciation of content or accounts by other users, and notify reporting users of the decision taken.

As for fairness, platforms should ensure that the rules are applied equally to all users. Although it is reasonable to suppose that platforms may adopt different criteria for public persons or information of public interest, these exceptions must be clear in the terms of use. This issue has recently been the subject of controversy between the Facebook Oversight Board and the company. 93

Due to the enormous amount of content published on the platforms and the inevitability of using automated mechanisms for content moderation, platforms should not be held accountable for a violation of these duties in specific cases, but only when the analysis reveals a systemic failure to comply. 94

C. Minimum Duties to Moderate Illicit Content

The regulatory framework should also contain specific obligations to address certain types of especially harmful speech. The following categories are considered by the authors to fall within this group: disinformation, hate speech, anti-democratic attacks, cyberbullying, terrorism, and child pornography. Admittedly, defining and consensually identifying the speech included in these categories—except in the case of child pornography 95 —is a complex and largely subjective task. Precisely for this reason, platforms should be free to define how the concepts will be operationalized, as long as they guide definitions by international human rights parameters and in a transparent manner. This does not mean that all platforms will reach the same definitions nor the same substantive results in concrete cases, but this should not be considered a flaw in the system, since the plurality of rules promotes freedom of expression. The obligation to observe international human rights parameters reduces the discretion of companies, while allowing for the diversity of policies among them. After defining these categories, platforms must establish mechanisms that allow users to report violations.

In addition, platforms should develop mechanisms to address coordinated inauthentic behaviors, which involve the use of automated systems or deceitful means to artificially amplify false or dangerous messages by using bots, fake profiles, trolls, and provocateurs. 96 For example, if a person publishes a post for his twenty followers saying that kerosene oil is good for curing COVID-19, the negative impact of this misinformation is limited. However, if that message is amplified to thousands of users, a greater public health issue arises. Or, in another example, if the false message that an election was rigged reaches millions of people, there is a democratic risk due to the loss of institutional credibility.

The role of oversight bodies should be to verify that platforms have adopted terms of use that prohibit the sharing of these categories of speech and ensure that, systemically, the recommendation and content moderation systems are trained to moderate this content.

V. Conclusion

The World Wide Web has provided billions of people with access to knowledge, information, and the public space, changing the course of history. However, the misuse of the internet and social media poses serious threats to democracy and fundamental rights. Some degree of regulation has become necessary to confront inauthentic behavior and illegitimate content. It is essential, however, to act with transparency, proportionality, and adequate procedures, so that pluralism, diversity, and freedom of expression are preserved.

In addition to the importance of regulatory action, the responsibility for the preservation of the internet as a healthy public sphere also lies with citizens. Media education and user awareness are fundamental steps for the creation of a free but positive and constructive environment on the internet. Citizens should be conscious that social media can be unfair, perverse, and can violate fundamental rights and basic rules of democracy. They must be attentive not to uncritically pass on all information received. Alongside states, regulators, and tech companies, citizens are also an important force to address these threats. In Jonathan Haidt’s words, “[w]hen our public square is governed by mob dynamics unrestrained by due process, we don’t get justice and inclusion; we get a society that ignores context, proportionality, mercy, and truth.” 97

  • 1 Tim Wu, Is the First Amendment Obsolete? , in The Perilous Public Square 15 (David E. Pozen ed., 2020).
  • 2 Jack M. Balkin, Free Speech is a Triangle , 118 Colum. L. Rev. 2011, 2019 (2018).
  • 3 LuĂ­s Roberto Barroso, O Constitucionalismo DemocrĂĄtico ou Neoconstitucionalismo como ideologia vitoriosa do sĂ©culo XX , 4 Revista Publicum 14, 14 (2018).
  • 4 Id. at 16.
  • 7 Ronald Dworkin, Is Democracy Possible Here?: Principles for a New Political Debate xii (2006); Ronald Dworkin, Taking Rights Seriously 181 (1977).
  • 8 Barroso, supra note 3, at 16.
  • 9 Samuel Issacharoff, Fragile Democracies: Contested Power in the Era of Constitutional Courts i (2015).
  • 10 Larry Diamond, Facing up to the Democratic Recession , 26 J. Democracy 141 (2015). Other scholars have referred to the same phenomenon using other terms, such as democratic retrogression, abusive constitutionalism, competitive authoritarianism, illiberal democracy, and autocratic legalism. See, e.g. , Aziz Huq & Tom Ginsburg, How to Lose a Constitutional Democracy , 65 UCLA L. Rev. 91 (2018); David Landau, Abusive Constitutionalism , 47 U.C. Davis L. Rev. 189 (2013); Kim Lane Scheppele, Autocratic Legalism , 85 U. Chi. L. Rev. 545 (2018).
  • 11 Dan Balz, A Year After Jan. 6, Are the Guardrails that Protect Democracy Real or Illusory? , Wash. Post (Jan. 6, 2022), https://perma.cc/633Z-A9AJ; Brexit: Reaction from Around the UK , BBC News (June 24, 2016), https://perma.cc/JHM3-WD7A.
  • 12 Cas Mudde, The Populist Zeitgeist , 39 Gov’t & Opposition 541, 549 (2004).
  • 13 See generally Mohammed Sinan Siyech, An Introduction to Right-Wing Extremism in India , 33 New Eng. J. Pub. Pol’y 1 (2021) (discussing right-wing extremism in India). See also Eviane Leidig, Hindutva as a Variant of Right-Wing Extremism , 54 Patterns of Prejudice 215 (2020) (tracing the history of “Hindutva”—defined as “an ideology that encompasses a wide range of forms, from violent, paramilitary fringe groups, to organizations that advocate the restoration of Hindu ‘culture’, to mainstream political parties”—and finding that it has become mainstream since 2014 under Modi); Ariel Goldstein, Brazil Leads the Third Wave of the Latin American Far Right , Ctr. for Rsch. on Extremism (Mar. 1, 2021), https://perma.cc/4PCT-NLQJ (discussing right-wing extremism in Brazil under Bolsonaro); Seth G. Jones, The Rise of Far-Right Extremism in the United States , Ctr. for Strategic & Int’l Stud. (Nov. 2018), https://perma.cc/983S-JUA7 (discussing right-wing extremism in the U.S. under Trump).
  • 14 Sergio Fausto, O Desafio DemocrĂĄtico [The Democratic Challenge], PiauĂ­ (Aug. 2022), https://perma.cc/474A-3849.
  • 15 Jan-Werner Muller, Populism and Constitutionalism , in The Oxford Handbook of Populism 590 (CristĂłbal Rovira Kaltwasser et al. eds., 2017).
  • 16 Ming-Sung Kuo, Against Instantaneous Democracy , 17 Int’l J. Const. L. 554, 558–59 (2019); see also Digital Populism , Eur. Ctr. for Populism Stud., https://perma.cc/D7EV-48MV.
  • 17 LuĂ­s Roberto Barroso, Technological Revolution, Democratic Recession and Climate Change: The Limits of Law in a Changing World , 18 Int’l J. Const. L. 334, 349 (2020).
  • 18 For the use of social media, see Sven Engesser et al., Populism and Social Media: How Politicians Spread a Fragmented Ideology , 20 Info. Commc’n & Soc’y 1109 (2017). For attacks on the press, see WPFD 2021: Attacks on Press Freedom Growing Bolder Amid Rising Authoritarianism , Int’l Press Inst. (Apr. 30, 2021), https://perma.cc/SGN9-55A8. For attacks on the judiciary, see Michael Dichio & Igor Logvinenko, Authoritarian Populism, Courts and Democratic Erosion , Just Sec. (Feb. 11, 2021), https://perma.cc/WZ6J-YG49.
  • 19 Kuo, supra note 16, at 558–59; see also Digital Populism , supra note 16.
  • 20 Vicki C. Jackson, Knowledge Institutions in Constitutional Democracy: Reflections on “the Press” , 15 J. Media L. 275 (2022).
  • 21 Many of the ideas and information on this topic were collected in Luna van Brussel Barroso, Liberdade de ExpressĂŁo e Democracia na Era Digital: O impacto das mĂ­dias sociais no mundo contemporĂąneo [Freedom of Expression and Democracy in the Digital Era: The Impact of Social Media in the Contemporary World] (2022), which was recently published in Brazil.
  • 22 The first industrial revolution is marked by the use of steam as a source of energy in the middle of the 18th century. The second started with the use of electricity and the invention of the internal combustion engine at the turn of the 19th to the 20th century. There are already talks of the fourth industrial revolution as a product of the fusion of technologies that blurs the boundaries among the physical, digital, and biological spheres. See generally Klaus Schwab, The Fourth Industrial Revolution (2017).
  • 23 Gregory P. Magarian, The Internet and Social Media , in The Oxford Handbook of Freedom of Speech 350, 351–52 (Adrienne Stone & Frederick Schauer eds., 2021).
  • 24 Wu, supra note 1, at 15.
  • 25 Journalistic ethics include distinguishing fact from opinion, verifying the veracity of what is published, having no self-interest in the matter being reported, listening to the other side, and rectifying mistakes. For an example of an international journalistic ethics charter, see Global Charter of Ethics for Journalists , Int’l Fed’n of Journalists (June 12, 2019), https://perma.cc/7A2C-JD2S.
  • 26 See, e.g. , New York Times Co. v. Sullivan, 376 U.S. 254 (1964).
  • 27 Balkin, supra note 2, at 2018.
  • 28 Magarian, supra note 23, at 351–52.
  • 29 Wu, supra note 1, at 15.
  • 30 Magarian, supra note 23, at 357–60.
  • 31 Niva Elkin-Koren & Maayan Perel, Speech Contestation by Design: Democratizing Speech Governance by AI , 50 Fla. State U. L. Rev. (forthcoming 2023).
  • 32 Thomas E. Kadri & Kate Klonick, Facebook v. Sullivan: Public Figures and Newsworthiness in Online Speech , 93 S. Cal. L. Rev. 37, 94 (2019).
  • 33 Elkin-Koren & Perel, supra note 31.
  ‱ 34 Chris Meserole, How Do Recommender Systems Work on Digital Platforms? , Brookings Inst. (Sept. 21, 2022), https://perma.cc/H53K-SENM.
  • 35 Kris Shaffer, Data versus Democracy: How Big Data Algorithms Shape Opinions and Alter the Course of History xi–xv (2019).
  • 36 See generally Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (2019).
  • 37 Shaffer, supra note 35, at xi–xv.
  • 38 More recently, with the advance of neuroscience, platforms have sharpened their ability to manipulate and change our emotions, feelings and, consequently, our behavior in accordance not with our own interests, but with theirs (or of those who they sell this service to). Kaveh Waddell, Advertisers Want to Mine Your Brain , Axios (June 4, 2019), https://perma.cc/EU85-85WX. In this context, there is already talk of a new fundamental right to cognitive liberty, mental self-determination, or the right to free will. Id .
  • 39 Content moderation refers to “systems that classify user generated content based on either matching or prediction, leading to a decision and governance outcome (e.g. removal, geoblocking, account takedown).” Robert Gorwa, Reuben Binns & Christian Katzenbach, Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance , 7 Big Data & Soc’y 1, 3 (2020).
  • 40 Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation , 51 U.C. Davis L. Rev. 1149, 1183 (2018).
  ‱ 41 See Carey Shenkman, Dhanaraj Thakur & Emma LlansĂł, Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis 13–16 (May 2021), https://perma.cc/J9MP-7PQ8.
  • 42 See id. at 17–21.
  • 43 See Michael Wooldridge, A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going 63 (2021).

  ‱ 44 Perceptual hashing has been the primary technology utilized to mitigate the spread of CSAM, since the same materials are often repeatedly shared, and databases of offending content are maintained by institutions like the National Center for Missing and Exploited Children (NCMEC) and its international analogue, the International Centre for Missing & Exploited Children (ICMEC). (A purely illustrative sketch of this kind of hash matching appears after these notes.)

  • 45 Natural language understanding is undermined by language ambiguity, contextual dependence of words of non-immediate proximity, references, metaphors, and general semantics rules. See Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do 52–55 (2021). Language comprehension in fact requires unlimited common-sense knowledge about the actual world, which humans possess and is impossible to code. Id . A case decided by Facebook’s Oversight Board illustrates the point: the company’s predictive filter for combatting pornography removed images from a breast cancer awareness campaign, a clearly legitimate content not meant to be targeted by the algorithm. See Breast Cancer Symptoms and Nudity , Oversight Bd. (2020), https://perma.cc/U9A5-TTTJ. However, based on prior training, the algorithm removed the publication because it detected pornography and was unable to factor the contextual consideration that this was a legitimate health campaign. Id .
  • 46 See generally Adriano Koshiyama, Emre Kazim & Philip Treleaven, Algorithm Auditing: Managing the Legal, Ethical, and Technological Risks of Artificial Intelligence, Machine Learning, and Associated Algorithms , 55 Computer 40 (2022).
  • 47 Elkin-Koren & Perel, supra note 31.
  • 48 Evelyn Douek, Governing Online Speech: From “Posts-as-Trumps” to Proportionality and Probability , 121 Colum. L. Rev. 759, 791 (2021).
  • 53 See Martha Minow, Saving the Press: Why the Constitution Calls for Government Action to Preserve Freedom of Speech 20 (2021). For example, the best-selling newspaper in the world, The New York Times , ended the year 2022 with around 10 million subscribers across digital and print. Katie Robertson, The New York Times Company Adds 180,000 Digital Subscribers , N.Y. Times (Nov. 2, 2022), https://perma.cc/93PF-TKC5. The Economist magazine had approximately 1.2 million subscribers in 2022. The Economist Group, Annual Report 2022 24 (2022), https://perma.cc/9HQQ-F7W2. Around the world, publications that reach one million subscribers are rare. These Are the Most Popular Paid Subscription News Websites , World Econ. F. (Apr. 29, 2021), https://perma.cc/L2MK-VPNX.
  • 54 Lawrence Lessig, They Don’t Represent Us: Reclaiming Our Democracy 105 (2019).
  • 55 Essential Facebook Statistics and Trends for 2023 , Datareportal (Feb. 19, 2023), https://perma.cc/UH33-JHUQ.
  • 56 YouTube User Statistics 2023 , Glob. Media Insight (Feb. 27, 2023), https://perma.cc/3H4Y-H83V.
  • 57 Brian Dean, WhatsApp 2022 User Statistics: How Many People Use WhatsApp , Backlinko (Jan. 5, 2022), https://perma.cc/S8JX-S7HN.
  • 58 Confirmation bias, the tendency to seek out and favor information that reinforces one’s existing beliefs, presents an obstacle to critical thinking. Sachin Modgil et al., A Confirmation Bias View on Social Media Induced Polarisation During COVID-19 , Info. Sys. Frontiers (Nov. 20, 2021).
  • 59 Minow, supra note 53, at 2.
  • 60 Id. at 3, 11.
  • 61 On the importance of the role of the press as an institution of public interest and its “crucial relationship” with democracy, see id. at 35. On the press as a “knowledge institution,” the idea of “institutional press,” and data on the loss of prestige by newspapers and television stations, see Jackson, supra note 20, at 4–5.
  • 62 See , e.g. , Jack M. Balkin, How to Regulate (and Not Regulate) Social Media , 1 J. Free Speech L. 71, 89–96 (2021).
  • 63 By possible truth we mean that not all claims, opinions and beliefs can be ascertained as true or false. Objective truths are factual and can thus be proven even when controversial—for example, climate change and the effectiveness of vaccines. Subjective truths, on the other hand, derive from individual normative, religious, philosophical, and political views. In a pluralistic world, any conception of freedom of expression must protect individual subjective beliefs.
  • 64 Eugene Volokh, In Defense of the Marketplace of Ideas/Search for Truth as a Theory of Free Speech Protection , 97 Va. L. Rev. 595, 595 (May 2011).
  • 66 Steven J. Heyman, Free Speech and Human Dignity 2 (2008).
  • 67 A Global Dialogue to Guide Regulation Worldwide , UNESCO (Feb. 23, 2023), https://perma.cc/ALK8-HTG3.
  • 68 Can We Fix What’s Wrong with Social Media? , Yale L. Sch. News (Aug. 3, 2022), https://perma.cc/MN58-2EVK.
  • 69 Lessig, supra note 54, at 105.
  • 71 See supra Part III.B.
  ‱ 72 Douek, supra note 48, at 804–13; see also John Bowers & Jonathan Zittrain, Answering Impossible Questions: Content Governance in an Age of Disinformation , Harv. Kennedy Sch. Misinformation Rev. (Jan. 14, 2020), https://perma.cc/R7WW-8MQX.
  • 73 Daphne Keller, Systemic Duties of Care and Intermediary Liability , Ctr. for Internet & Soc’y Blog (May 28, 2020), https://perma.cc/25GU-URGT.
  ‱ 75 Lei No. 12.965, de 23 de abril de 2014, DiĂĄrio Oficial da UniĂŁo [D.O.U.] de 4.14.2014 (Braz.) art. 19. In order to ensure freedom of expression and prevent censorship, providers of internet applications can be held civilly liable for damages resulting from content generated by third parties only if, after a specific court order, they fail to take steps, within the scope and technical limits of their service and within the time indicated, to make unavailable the content identified as infringing, subject to the applicable legal provisions. Id.
  ‱ 76 Id. art. 21. An internet application provider that makes available content generated by third parties will be held liable for violations of intimacy resulting from the unauthorized disclosure of images, videos, or other materials containing nudity or private sexual acts when, upon receiving notification from the participant or their legal representative, it fails to act diligently, within the scope and technical limits of its service, to make that content unavailable. Id.
  • 77 Balkin, supra note 2, at 2017.
  • 78 Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech , 131 Harv. L. Rev. 1598, 1603 (2018).
  • 79 Transparency Reporting Index, Access Now (July 2021), https://perma.cc/2TSL-2KLD (cataloguing transparency reporting from companies around the world).
  • 80 Hum. Rts. Comm., Rep. of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, ¶¶ 63–66, U.N. Doc A/HRC/32/35 (2016).
  • 81 Paddy Leerssen, The Soap Box as a Black Box: Regulating Transparency in Social Media Recommender Systems , 11 Eur. J. L. & Tech. (2020).
  • 82 Daphne Keller, Some Humility About Transparency , Ctr. for Internet & Soc’y Blog (Mar. 19, 2021), https://perma.cc/4Y85-BATA.
  • 83 Mark MacCarthy, Transparency Requirements for Digital Social Media Platforms: Recommendations for Policy Makers and Industry , Transatlantic Working Grp. (Feb. 12, 2020).
  • 84 2022 O.J. (L 277) 1 [hereinafter DSA].
  • 85 The DSA was approved by the European Parliament on July 5, 2022, and on October 4, 2022, the European Council gave its final acquiescence to the regulation. Digital Services: Landmark Rules Adopted for a Safer, Open Online Environment , Eur. Parliament (July 5, 2022), https://perma.cc/BZP5-V2B2. The DSA increases transparency and accountability of platforms, by providing, for example, for the obligation of “clear information on content moderation or the use of algorithms for recommending content (so-called recommender systems); users will be able to challenge content moderation decisions.” Id .
  ‱ 86 MacCarthy, supra note 83, at 19–24.
  • 87 To this end, American legislators recently introduced a U.S. Congressional bill that proposes a model for conducting research on the impacts of digital communications in a way that protects user privacy. See Platform Accountability and Transparency Act, S. 5339, 117th Congress (2022). The project mandates that digital platforms share data with researchers previously authorized by the Federal Trade Commission and publicly disclose certain data about content, algorithms, and advertising. Id .
  • 88 Yifat Nahmias & Maayan Perel, The Oversight of Content Moderation by AI: Impact Assessment and Their Limitations , 58 Harv. J. on Legis. 145, 154–57 (2021).
  • 89 Auditing Algorithms: The Existing Landscape, Role of Regulator and Future Outlook , Digit. Regul. Coop. F. (Sept. 23, 2022), https://perma.cc/7N6W-JNCW.
  • 90 See generally Koshiyama et al., supra note 46.
  • 91 In Article 37, the DSA provides that digital platforms of a certain size should be accountable, through annual independent auditing, for compliance with the obligations set forth in the Regulation and with any commitment undertaken pursuant to codes of conduct and crisis protocols.
  • 92 Digit. Regul. Coop. F., supra note 89.
  • 93 In a transparency report published at the end of its first year of operation, the Oversight Board highlighted the inadequacy of the explanations presented by Meta on the operation of a system known as cross-check, which apparently gave some users greater freedom on the platform. In January 2022, Meta explained that the cross-check system grants an additional degree of review to certain content that internal systems mark as violating the platform’s terms of use. Meta submitted a query to the Board on how to improve the functioning of this system and the Board made relevant recommendations. See Oversight Board Published Policy Advisory Opinion on Meta’s Cross-Check Program , Oversight Bd. (Dec. 2022), https://perma.cc/87Z5-L759.
  • 94 Evelyn Douek, Content Moderation as Systems Thinking , 136 Harv. L. Rev. 526, 602–03 (2022).
  • 95 The illicit nature of child pornography is objectively apprehended and does not implicate the same subjective considerations that the other referenced categories entail. Not surprisingly, several databases have been created to facilitate the moderation of this content. See Ofcom, Overview of Perceptual Hashing Technology 14 (Nov. 22, 2022), https://perma.cc/EJ45-B76X (“Several hash databases to support the detection of known CSAM exist, e.g. the National Center for Missing and Exploited Children (NCMEC) hash database, the Internet Watch Foundation (IWF) hash list and the International Child Sexual Exploitation (ICSE) hash database.”).
  • 97 Jonathan Haidt, Why the Past 10 Years of American Life Have Been Uniquely Stupid , Atlantic (Apr. 11, 2022), https://perma.cc/2NXD-32VM.
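
Footnotes 44 and 95 describe matching uploads against databases of known abusive material. As a purely illustrative sketch of that lookup step, the fragment below uses a toy "average hash" and a Hamming-distance threshold; the systems referenced in those notes (e.g., PhotoDNA and the NCMEC/IWF hash lists) rely on far more robust, partly proprietary perceptual hashes, and the function names and threshold here are assumptions made for illustration only.

```python
from PIL import Image  # Pillow

def average_hash(path, size=8):
    """Toy perceptual hash: one bit per pixel, set when brighter than the image mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_database(path, known_hashes, max_distance=5):
    """True if the image is within max_distance bits of any hash in the database."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= max_distance for known in known_hashes)
```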

Tech & Rights

Free Speech on Social Media: Filtering Methods, Rights, Future Prospects

Yes, our right to free speech absolutely exists online. But there is serious debate about how to regulate our freedom of speech in the online sphere, particularly on social media.

by LibertiesEU

The rise of social media has implications for our fundamental rights, perhaps none more so than our freedom of speech. There is no doubt that our right to free speech extends online. But there is considerable and complex debate on how to regulate the online sphere, particularly social media. How the regulations are constructed, where the lines are drawn, will have huge implications for our freedom of speech on social media.

What does free speech mean?

Free speech means you have the freedom to express yourself in any way that does not take away the rights of other people. You can (and should) feel free to criticize the work your elected officials are doing. You should not feel free to hold band practice late into the night, because that could take away your neighbors’ right to privacy. And when they complain about the noise, you can’t encourage people to destroy their property or worse. But up to that point, you’re free to express yourself.

This is why free speech is so central to democracy. Democracy means that everyone in society makes collective decisions about the laws they live under and who administers them. The free exchange of ideas, opinions and information provide us with the knowledge we need to make those decisions. That’s also why free speech and the organs that support it, such as free media and civil society, are often the first things that disappear in autocracies.

How free is speech on social media and on the internet in general?

The extent to which someone can freely express themselves online varies from country to country. In the EU, there are laws that protect our freedom to express ourselves online. In some cases, though, the ease of online speech has allowed it to step far beyond the bounds of free speech – consider online bullying or threats, or the sharing of extremist content or child pornography. These forms of “expression” are not protected speech.

But in other areas, drawing the line is more complicated. The EU has been dealing with how to protect the rights of copyright owners against the right of people to share legal content. Should such an enormous and difficult task be farmed out to AI? Surely some of it must be, but how this is done could have profound implications for free speech.

Liberties has been adamant that compromising free speech, even putting it at potential risk, is a no-go. And that’s how it should be – if we are to err, let it be that not enough of our fundamental right to free speech was limited, and not that we gave too much of it away. That’s why we’ve advocated for users’ free speech during the EU’s work on new copyright law. And why we warned European decision-makers that their plan to regulate online terrorist content might unduly restrict free speech .

We are also mindful of the role online platforms have in determining free speech. Although we may use their services to share our thoughts, there is an obvious danger in making them arbiters of what is and is not free speech. Such decisions need to be made by independent judges, and certainly not by companies with a vested interest in making sure the content they allow and promote is good business for them.

What is important to know about free speech rights on social media?

The rise of social media has given new importance to protecting free speech. People are often able to stay anonymous when they say things – not necessarily a bad thing, especially in places where criticizing the government can put you or your family in danger. Or when you want to seek help for a private medical issue. But social media allows people to use anonymity to bully, harass, intimidate or stalk people.

Social media also gives everyone a platform. Again, this is not an inherently bad thing. It not only allows anyone to share their ideas, but connects us faster and cheaper, allowing us to exchange ideas and create things. But it also gives people the ability to easily spread disinformation that can cause harm both to individuals and society as a whole.

How do social media companies filter speech?

Social media companies can filter speech, and thus limit free speech, by using both humans and artificial intelligence to review content that might not be free to share. They can lawfully remove what you share, or block you from sharing content, if your content is not protected speech – for instance, if you use social media to incite violence against someone. And, of course, social media companies have terms of service that list myriad further grounds for sanction. (Although it can also be the case that their terms of service breach the law by limiting lawful content.)
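
As a rough illustration of how the automated filtering and human review described above might be wired together, the sketch below routes clear cases automatically and sends borderline content to a review queue. The classifier, thresholds, and outcomes are assumptions made for illustration and do not describe any particular company's system.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ModerationPipeline:
    # score_fn maps post text to a violation score: 0.0 = clearly fine, 1.0 = clearly violating.
    score_fn: Callable[[str], float]
    remove_threshold: float = 0.95
    review_threshold: float = 0.60
    review_queue: List[str] = field(default_factory=list)

    def handle(self, post: str) -> str:
        score = self.score_fn(post)
        if score >= self.remove_threshold:
            return "removed"                # automated removal; should remain appealable
        if score >= self.review_threshold:
            self.review_queue.append(post)  # escalate borderline content to a human moderator
            return "queued_for_review"
        return "published"

# Example with a trivial keyword scorer; real systems use trained classifiers.
pipeline = ModerationPipeline(score_fn=lambda text: 1.0 if "kill you" in text.lower() else 0.0)
print(pipeline.handle("lovely weather today"))  # -> published
```

The argument made later in this piece, that users should be able to request human review and ultimately go to a judge, corresponds to keeping the "removed" and "queued_for_review" paths appealable rather than final.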

Perhaps the most drastic way social media companies filter speech is by blocking some people from using their services at all. This has the effect of limiting the voices that can be heard on a platform. Some would argue that’s a good thing, and this is certainly the case when people have spread hate speech or incited violence. These issues were front and center when a certain former president of the United States was blocked from Twitter and Facebook following the attack on the U.S. Capitol.

What does the future hold for free speech on social media?

It may be a short and disappointing answer, but the truth is that we don’t know what the future holds. There seems to be a consensus that we shouldn’t allow illegal content to be shared on the internet. But it’s easier said than done. Companies, politicians and rights groups all have disagreements about how exactly to do this, and which considerations should be given more weight than others.

Regulating online speech is complicated. But if we leave it up to social media companies and their algorithms, our free speech, and thus our democracy, will suffer. They should use a fraction of their profits to create a complaints system where you can always request human review of a decision to filter content. And, if necessary, anyone should be able to go to a judge to have their case heard.

Social media platforms should not just be held accountable for leaving illegal content online. They should also bear responsibility for wrongly taking down legal content. This incentivizes them to create a review system that appropriately considers the free speech of the user. And to ensure that this remains the case, the tech industry must be properly regulated. This ensures that they can continue to grow and prosper without our rights being restricted.

But the truth is, at the moment we don’t really know how their algorithms work. We don’t really know how much material they remove or block, or for what reasons, or how they curate our news feed. To make sure they’re doing their best to protect free speech, all this information has to be available to researchers, authorities and independent watchdogs, like Liberties, who can check on them.


Student Opinion

Why Is Freedom of Speech an Important Right? When, if Ever, Can It Be Limited?

By Michael Gonchar

  • Sept. 12, 2018

This extended Student Opinion question and a related lesson plan were created in partnership with the National Constitution Center in advance of Constitution Day on Sept. 17. For information about a cross-classroom “Constitutional Exchange,” see The Lauder Project .

One of the founding principles of the United States that Americans cherish is the right to freedom of speech. Enshrined in the First Amendment to the Constitution, freedom of speech grants all Americans the liberty to criticize the government and speak their minds without fear of being censored or persecuted.

Even though the concept of freedom of speech on its face seems quite simple, in reality there are complex lines that can be drawn around what kinds of speech are protected and in what setting.

The Supreme Court declared in the case Schenck v. United States in 1919 that individuals are not entitled to speech that presents a “clear and present danger” to society. For example, a person cannot falsely yell “fire” in a crowded theater because that speech doesn’t contribute to the range of ideas being discussed in society, yet the risk of someone getting injured is high. On the other hand, in Brandenburg v. Ohio in 1969, the court declared that even inflammatory speech, such as racist language by a leader of the Ku Klux Klan, should generally be protected unless it is likely to cause imminent violence.

While the text and principle of the First Amendment have stayed the same, the court’s interpretation has indeed changed over time . Judges, lawmakers and scholars continue to struggle with balancing strong speech protections with the necessity of maintaining a peaceful society.

What do you think? Why is the freedom of speech an important right? Why might it be important to protect even unpopular or hurtful speech? And yet, when might the government draw reasonable limits on speech, and why?

Before answering this question, read the full text of the amendment. What does it say about speech?

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Next, read these excerpts from three recent articles about free speech cases that might affect your life:

In a September 2017 article, “ High Schools Threaten to Punish Students Who Kneel During Anthem ,” Christine Hauser writes:

The controversy over kneeling in protest of racial injustice moved beyond the world of professional sports this week, when a number of schools told students they were expected to stand during the national anthem. On Long Island, the Diocese of Rockville Centre, which runs a private Catholic school system, said students at its three high schools could face “serious disciplinary action” if they knelt during the anthem before sporting events.

In a June 2018 article, “ Colleges Grapple With Where — or Whether — to Draw the Line on Free Speech ,” Alina Tugend writes:

It has happened across the country, at small private colleges and large public universities: an invited guest is heckled or shouted down or disinvited because of opposing political views. And the incident is followed by a competing chorus of accusations about the rights of free speech versus the need to feel safe and welcome. It’s something those in higher education have grappled with for decades. But after the 2016 presidential election and the increasing polarization of the country, the issue has taken on a new resonance.

In another June 2018 article, “ Supreme Court Strikes Down Law Barring Political Apparel at Polling Places ,” Adam Liptak writes:

The Supreme Court on Thursday struck down a Minnesota law that prohibits voters from wearing T-shirts, hats and buttons expressing political views at polling places. In a cautious 7-to-2 decision, the court acknowledged the value of decorum and solemn deliberation as voters prepare to cast their ballots. But Chief Justice John G. Roberts Jr. wrote that Minnesota’s law was not “capable of reasoned application.”

Students, read at least one of the above articles in its entirety, then tell us:

— Why is the freedom of speech an important right? Why do you think it’s worth protecting?

— What is the value in protecting unpopular speech?

— The Supreme Court has determined that certain types of speech, such as fighting words, violent threats and misleading advertising, are of only “low” First Amendment value because they don’t contribute to a public discussion of ideas, and are therefore not protected. Even though the text of the First Amendment does not make any distinction between “low” and “high” value speech, do you think the court is correct in ruling that some categories of speech are not worth protecting? What types of speech would you consider to be “low” value? What types of speech are “high” value, in your opinion?

— What do you think about the free speech issues raised in the three articles above? For example:

  ‱ Should students be allowed to kneel during the national anthem? Why?
  ‱ Should colleges be allowed to forbid controversial or “offensive” guests from speaking on campus? Why?
  ‱ Should individuals be able to wear overtly political T-shirts or hats to the polling booth? Why?

— When might the government draw reasonable limits to the freedom of speech, and why?

— We now want to ask you an important constitutional question: When does the First Amendment allow the government to limit speech? We want to hear what you think. But to clarify, we’re not asking for your opinion about policy. In other words, we’re not asking whether a certain type of speech, like flag burning or hate speech, should be protected or prohibited. Instead, we’re asking you to interpret the Constitution: Does the First Amendment protect that speech?

Do your best to base your interpretation on the text of the amendment itself and your knowledge of how it can be understood. You may want to consult this essay in the National Constitution Center’s Interactive Constitution to learn more about how scholars and judges have interpreted the First Amendment, but rest assured, you don’t have to be a Supreme Court justice to have an opinion on this matter, and even the justices themselves often disagree.

— When you interpret the First Amendment, what do you think it has to say about the free speech issues raised in the three articles? For example:

  ‱ Does the First Amendment protect the right of students at government-run schools (public schools) to protest? What about students who attend private schools?
  ‱ Does the First Amendment allow private colleges to prohibit certain controversial speakers? What about government-run colleges (public colleges)?
  ‱ Finally, does the First Amendment protect voters’ right to wear whatever they want to the polling booth?

Are any of your answers different from your answers above, when you answered the three “should” questions?

— When scholars, judges and lawmakers try to balance strong speech protections with the goal of maintaining a peaceful society, what ideas or principles do you think are most important for them to keep in mind? Explain.

Students 13 and older are invited to comment. All comments are moderated by the Learning Network staff, but please keep in mind that once your comment is accepted, it will be made public.

The First Amendment: Interpretation & Debate

Freedom of Speech and the Press

Matters of Debate: Common Interpretation, Fixing Free Speech, Frontiers for Free Speech

by Geoffrey R. Stone

Edward H. Levi Distinguished Service Professor of Law at the University of Chicago Law School

by Eugene Volokh

Gary T. Schwartz Distinguished Professor of Law; Founder and Co-Author of "The Volokh Conspiracy" at Reason Magazine

“Congress shall make no law . . .  abridging the freedom of speech, or of the press.” What does this mean today? Generally speaking, it means that the government may not jail, fine, or impose civil liability on people or organizations based on what they say or write, except in exceptional circumstances.

Although the First Amendment says “Congress,” the Supreme Court has held that speakers are protected against all government agencies and officials: federal, state, and local, and legislative, executive, or judicial. The First Amendment does not protect speakers, however, against private individuals or organizations, such as private employers, private colleges, or private landowners. The First Amendment restrains only the government.

The Supreme Court has interpreted “speech” and “press” broadly as covering not only talking, writing, and printing, but also broadcasting, using the Internet, and other forms of expression. The freedom of speech also applies to symbolic expression, such as displaying flags, burning flags, wearing armbands, burning crosses, and the like.

The Supreme Court has held that restrictions on speech because of its content —that is, when the government targets the speaker’s message—generally violate the First Amendment. Laws that prohibit people from criticizing a war, opposing abortion, or advocating high taxes are examples of unconstitutional content-based restrictions. Such laws are thought to be especially problematic because they distort public debate and contradict a basic principle of self-governance: that the government cannot be trusted to decide what ideas or information “the people” should be allowed to hear.

There are generally three situations in which the government can constitutionally restrict speech under a less demanding standard.

1. In some circumstances, the Supreme Court has held that certain types of speech are of only “low” First Amendment value, such as:

a. Defamation: False statements that damage a person’s reputation can lead to civil liability (and even to criminal punishment), especially when the speaker deliberately lied or said things they knew were likely false. New York Times v. Sullivan (1964).

b. True threats: Threats to commit a crime (for example, “I’ll kill you if you don’t give me your money”) can be punished. Watts v. United States (1969).

c. “Fighting words”: Face-to-face personal insults that are likely to lead to an immediate fight are punishable. Chaplinsky v. New Hampshire (1942). But this does not include political statements that offend others and provoke them to violence.  For example, civil rights or anti-abortion protesters cannot be silenced merely because passersby respond violently to their speech. Cox v. Louisiana (1965).

d. Obscenity: Hard-core, highly sexually explicit pornography is not protected by the First Amendment. Miller v. California (1973). In practice, however, the government rarely prosecutes online distributors of such material.

e. Child pornography: Photographs or videos involving actual children engaging in sexual conduct are punishable, because allowing such materials would create an incentive to sexually abuse children in order to produce such material. New York v. Ferber (1982).

f. Commercial advertising: Speech advertising a product or service is constitutionally protected, but not as much as other speech. For instance, the government may ban misleading commercial advertising, but it generally can’t ban misleading political speech. Virginia Pharmacy v. Virginia Citizens Council (1976).

Outside these narrow categories of “low” value speech, most other content-based restrictions on speech are presumptively unconstitutional. Even entertainment, vulgarity, “hate speech” (bigoted speech about particular races, religions, sexual orientations, and the like), blasphemy (speech that offends people’s religious sensibilities), and violent video games are protected by the First Amendment. The Supreme Court has generally been very reluctant to expand the list of “low” value categories of speech.

2. The government can restrict speech under a less demanding standard when the speaker is in a special relationship to the government. For example, the speech of government employees and of students in public schools can be restricted, even based on content, when their speech is incompatible with their status as public officials or students. A teacher in a public school, for example, can be punished for encouraging students to experiment with illegal drugs, and a government employee who has access to classified information generally can be prohibited from disclosing that information. Pickering v. Board of Education (1968).

3. The government can also restrict speech under a less demanding standard when it does so without regard to the content or message of the speech. Content-neutral restrictions, such as restrictions on noise, blocking traffic, and large signs (which can distract drivers and clutter the landscape), are generally constitutional as long as they are “reasonable.” Because such laws apply neutrally to all speakers without regard to their message, they are less threatening to the core First Amendment concern that government should not be permitted to favor some ideas over others. Turner Broadcasting System, Inc. v. FCC (1994). But not all content-neutral restrictions are viewed as reasonable; for example, a law prohibiting all demonstrations in public parks or all leafleting on public streets would violate the First Amendment. Schneider v. State (1939).

Courts have not always been this protective of free expression. In the nineteenth century, for example, courts allowed punishment of blasphemy, and during and shortly after World War I the Supreme Court held that speech tending to promote crime—such as speech condemning the military draft or praising anarchism—could be punished. Schenck v. United States (1919). Moreover, it was not until 1925 that the Supreme Court held that the First Amendment limited state and local governments, as well as the federal government. Gitlow v. New York (1925).

But starting in the 1920s, the Supreme Court began to read the First Amendment more broadly, and this trend accelerated in the 1960s. Today, the legal protection offered by the First Amendment is stronger than ever before in our history.

Three issues involving the freedom of speech are most pressing for the future.

Money, Politics, and the First Amendment

The first pressing issue concerns the regulation of money in the political process. Put simply, the question is this: To what extent, and in what circumstances, can the government constitutionally restrict political expenditures and contributions in order to “improve” the democratic process?

In its initial encounters with this question, the Supreme Court held that political expenditures and contributions are “speech” within the meaning of the First Amendment because they are intended to facilitate political expression by political candidates and others. The Court also recognized, however, that political expenditures and contributions could be regulated consistent with the First Amendment if the government could demonstrate a sufficiently important justification. In Buckley v. Valeo (1976), for example, the Court held that the government could constitutionally limit the amount that individuals could contribute to political candidates in order to reduce the risk of undue influence, and in McConnell v. Federal Election Commission (2003), the Court held that the government could constitutionally limit the amount that corporations could spend in the political process in order to influence electoral outcomes.

In more recent cases, though, in a series of five-to-four decisions, the Supreme Court has overruled McConnell and held unconstitutional most governmental efforts to regulate political expenditures and contributions. Citizens United v. Federal Election Commission (2010); McCutcheon v. Federal Election Commission (2014). As a result of these more recent decisions, almost all government efforts to limit the impact of money in the political process have been held unconstitutional, with the consequence that corporations and wealthy individuals now have an enormous impact on American politics.

Those who object to these decisions maintain that regulations of political expenditures and contributions are content-neutral restrictions of speech that should be upheld as long as the government has a sufficiently important justification. They argue that the need to prevent what they see as the corruption and distortion of American politics caused by the excessive influence of a handful of very wealthy individuals and corporations is a sufficiently important government interest to justify limits on the amount that those individuals and corporations should be permitted to spend in the electoral process.

Because these recent cases have all been five-to-four decisions, it remains to be seen whether a differently constituted set of justices in the future will adhere to the current approach, or whether they will ultimately overrule or at least narrowly construe those decisions. In many ways, this is the most fundamental First Amendment question that will confront the Supreme Court and the nation in the years to come.

The Meaning of “Low” Value Speech

The second pressing free speech issue concerns the scope of “low” value speech. In recent years, the Supreme Court has taken a narrow view of the low value concept, suggesting that, in order for a category of speech to fall within that concept, there has to have been a long history of government regulation of the category in question. This is true, for example, of such low value categories as defamation, obscenity, and threats. An important question for the future is whether the Court will adhere to this approach.

The primary justification for the Court’s insistence on a history of regulation is that this limits the discretion of the justices to pick-and-choose which categories of expression should be deemed to have only low First Amendment value. A secondary justification for the Court’s approach is that a history of regulation of a category of expression provides some basis in experience for evaluating the possible effects – and dangers – of declaring a new category of speech to have only low First Amendment value.

Why does this doctrine matter? To cite one illustration, under the Court’s current approach, so-called “hate speech” – speech that expressly denigrates individuals on the basis of such characteristics as race, religion, gender, national origin, and sexual orientation – does not constitute low value speech because it has not historically been subject to regulation. As a result, except in truly extraordinary circumstances, such expression cannot be regulated consistent with the First Amendment. Almost every other nation allows such expression to be regulated and, indeed, prohibited, on the theory that it does not further the values of free expression and is incompatible with other fundamental values of society.

Similarly, under the Court’s approach to low value speech it is unclear whether civil or criminal actions for “invasion of privacy” can be reconciled with the First Amendment. For example, can an individual be punished for distributing on the Internet “private” information about other persons without their consent? Suppose, for example, an individual posts naked photos of a former lover on the Internet. Is that speech protected by the First Amendment, or can it be restricted as a form of “low” value speech? This remains an unresolved question.

Leaks of Classified Information

The Supreme Court has held that the government cannot constitutionally prohibit the publication of classified information unless it can demonstrate that the publication or distribution of that information will cause a clear and present danger of grave harm to the national security. New York Times v. United States (The “Pentagon Papers” case) (1971). At the same time, though, the Court has held that government employees who gain access to such classified information can be restricted in their unauthorized disclosure of that information. Snepp v. United States (1980). It remains an open question, however, whether a government employee who leaks information that discloses an unconstitutional, unlawful, or unwise classified program can be punished for doing so. This issue has been raised by a number of recent incidents, including the case of Edward Snowden. At some point in the future, the Court will have to decide whether and to what extent the actions of government leakers like Edward Snowden are protected by the First Amendment.

I like Professor Stone’s list of important issues. I think speech about elections, including speech that costs money, must remain protected, whether it’s published by individuals, nonprofit corporations, labor unions, media corporations, or nonmedia business corporations. (Direct contributions to candidates, as opposed to independent speech about them, can be restricted, as the Court has held.) And I think restrictions on “hate speech” should remain unconstitutional. But I agree these are likely to be heavily debated issues in the coming years. I’d like to add three more issues as well.

Professional-Client Speech

Many professionals serve their clients by speaking. Psychotherapists try to help their patients by talking with them. Doctors make diagnoses, offer predictions, and recommend treatments. Lawyers give legal advice; financial planners, financial advice. Some of these professionals also do things (such as prescribe drugs, perform surgeries, or file court documents that have legal effect). But much of what they do is speak.

Yet the law heavily regulates such speakers. It bars people from giving any legal, medical, psychiatric, or similar advice unless they first get licenses (which can take years and hundreds of thousands of dollars’ worth of education to get)—though the government couldn’t require a license for people to become journalists or authors. The law lets clients sue professionals for malpractice, arguing that the professionals’ opinions or predictions proved to be “unreasonable” and harmful, though similar lawsuits against newspapers or broadcasters would be unconstitutional.

And the law sometimes forbids or compels particular speech by these professionals. Some states ban psychiatrists from offering counseling aimed at changing young patients’ sexual orientation. Florida has restricted doctors’ questioning their patients about whether the patients own guns. Many states, hoping to persuade women not to get abortions, require doctors to say certain things or show certain things to women who are seeking abortions. The federal government has tried to punish doctors who recommend that their patients use medical marijuana (which is illegal under federal law, but which can be gotten in many states with the doctor’s recommendation).

When are these laws constitutional? Moreover, if there is a First Amendment exception that allows such regulations of professional-client speech, which professions does it cover? What about, for instance, tour guides, fortunetellers, veterinarians, or diet advisors? Courts are only beginning to confront the First Amendment implications of these sorts of restrictions, and the degree to which the government’s interest in protecting clients—and in preventing behavior that the government sees as harmful—can justify restricting professional-client speech.

Crime-Facilitating Speech

Some speech contains information that helps people commit crimes, or get away with committing crimes. Sometimes this is general information, for instance about how bombs are made, how locks can be picked, how deadly viruses can be created, how technological protections for copyrighted works can be easily evaded, or how a contract killer can get away with his crime.

Sometimes this is specific information, such as the names of crime witnesses that criminals might want to silence, the location of police officers whom criminals might want to avoid, or the names of undercover officers or CIA agents. Indeed, sometimes this can be as familiar as people flashing lights to alert drivers that a police officer is watching; people are occasionally prosecuted for this, because they are helping others get away with speeding.

Sometimes this speech is said specifically with the purpose of promoting crime—but sometimes it is said for other purposes: consider chemistry books that talk about explosives; newspaper articles that mention people’s names so the readers don’t feel anything is being concealed; or novels that accurately describe crimes just for entertainment. And sometimes it is said for political purposes, for instance when someone describes how easy it is to evade copyright law or proposed laws prohibiting 3-D printing of guns, in trying to explain why those laws need to be rejected.

Surprisingly, the Supreme Court has never explained when such speech can be restricted. The narrow incitement exception, which deals with speech that aims to persuade people to commit imminent crimes, is not a good fit for speech that, deliberately or not, informs people about how to commit crimes at some point in the future. This too is a field that the Supreme Court will likely have to address in coming decades.

“Hostile Environment Harassment” Rules

Finally, some government agencies, courts, and universities have reasoned that the government may restrict speech that sufficiently offends employees, students, or business patrons based on race, religion, sex, sexual orientation, and the like. Here’s how the theory goes: Laws ban discrimination based on such identity traits in employment, education, and public accommodations. And when speech is “severe or pervasive” enough to create a “hostile or offensive environment” based on those traits, such speech becomes a form of discrimination. Therefore, the argument goes, a wide range of speech—such as display of Confederate flags, unwanted religious proselytizing, speech sharply criticizing veterans, speech suggesting that Muslims are disloyal, display of sexually suggestive materials, sexually-themed humor, sex-based job titles (such as “foreman” or “draftsman”), and more—can lead to lawsuits.

Private employers are paying attention, and restricting such speech by their employees. Universities are enacting speech codes restricting such speech. Even speech in restaurants and other public places, whether put up by the business owner or said by patrons, can lead to liability for the owner. And this isn’t limited to offensive speech said to a particular person who doesn’t want to hear it. Even speech posted on the wall or overheard in the lunchroom can lead to liability, and would thus be suppressed by “hostile environment” law.

To be sure, private employers and business owners aren’t bound by the First Amendment, and are thus generally free to restrict such speech on their property. And even government employers and enterprises generally have broad latitude to control what is said on their property (setting aside public universities, which generally have much less such latitude). But here the government is pressuring all employers, universities, and businesses to impose speech codes, by threatening liability on those who don’t impose such codes. And that government pressure is subject to First Amendment scrutiny.

Some courts have rejected some applications of this “hostile environment” theory on First Amendment grounds; others have upheld other applications. This too is something the Supreme Court will have to consider.

Display of banned books or censored books at Books Inc independent bookstore in Alameda, California, October 16, 2021.

Rebecca Boone, Associated Press


Experts say attacks on free speech are rising across the U.S.

BOISE, Idaho (AP) — In Idaho, an art exhibit was censored and teens were told they couldn’t testify in some legislative hearings. In Washington state, a lawmaker proposed a hotline so the government could track offensively biased statements, as well as hate crimes. In Florida, bloggers are fighting a bill that would force them to register with the state if they write posts criticizing public officials.

Meanwhile, bans on books and drag performances are growing increasingly common nationwide.

“We are seeing tremendous attacks on First Amendment freedoms across the country right now, at all levels of government. Censorship is proliferating, and it’s deeply troubling,” said Joe Cohn, legislative and policy director with the Foundation for Individual Rights and Expression.

“This year, we’re seeing a wave of bills targeting drag performances, where simply being gender nonconforming is enough to trigger the penalty. We’re also seeing a wave of bills regulating what can be in public or K-12 school libraries,” Cohn said. “On college campuses, we have been tracking data about attempts to get faculty members punished or even fired for speech or expression and the numbers are startling — it’s the highest rate that we’ve seen in our 20 years of existence.”

First Amendment rights had been stable in America for decades, said Ken Paulson, director of the Free Speech Center at Middle Tennessee State University, but in recent years many states have reverted to the anti-speech tactics employed by people like Sen. Joe McCarthy during the “Red Scare” of the early 1950s.


McCarthy and others tried to silence political opponents by accusing them of being communists or socialists, using fear and public accusations to suppress basic free speech rights. The term “McCarthyism” became synonymous with baseless attacks on free expression, and the U.S. Supreme Court has referred to the phenomenon in several First Amendment-related rulings.

“We are seeing a concerted wave that we have not seen in decades,” said Paulson, highlighting states like Florida where Republican Gov. Ron DeSantis has pushed for legislation that would criminalize drag shows, limit what pronouns teachers can use for students, allow parents to determine what books can be in libraries and block some history classes entirely.

“It’s pretty mind-boggling that so many politicians are waving the flag of freedom while doing anything they possibly can to infringe on the free speech rights of Americans,” Paulson said.

Still, no one political group has a monopoly on censorship — aggression is increasing across the spectrum, Cohn said.

Washington state’s bias hotline bill, which died in committee earlier this year, was sponsored by Democratic Sen. Javier Valdez and backed by several groups including the Anti-Defamation League, Urban League, Council on American-Islamic Relations and others. It aimed to help the state collect information about hate crimes and bias incidents and to provide support and compensation to victims at a time when hate crime reports are rising.

Opponents, including the Foundation for Individual Rights and Expression, said they feared it would chill protected speech because it encompasses both criminal behavior and offensively biased statements.

Hate speech can be damaging and repugnant, but it is still generally protected by the First Amendment. The Department of Homeland Security and experts who study extremism have warned that hateful rhetoric can be seen as a call to action by extremist groups.


Oregon created a similar bias hotline in 2019. It received nearly 1,700 calls in 2021, with nearly 60 percent of the reported incidents falling short of criminal standards, according to an annual report from Oregon Attorney General Ellen Rosenblum’s office.

“People in power target their political adversaries, so who is being silenced really depends on where you are on the map and its individual context,” Cohn said.

Artist Katrina Majkut experienced that first-hand last week, when artworks she had shown in more than two dozen states over the past decade were unexpectedly censored at a small state school in Lewiston, Idaho.

Majkut uses embroidery to highlight and subvert historically narrow ideas of wifedom and motherhood. She was hired to curate an exhibit at Lewis-Clark State College focusing on health care issues like chronic illness, pregnancy and gun violence.

But on March 2, a day before the show’s opening, Majkut and two other artists were told some of their work would be removed over administrator fears about running afoul of Idaho’s “No Public Funds for Abortion Act.”

The 2021 law bars state-funded entities from promoting abortion or taking other measures that could be seen as training or counseling someone in favor of abortion.

Majkut’s cross-stitch depicting misoprostol and mifepristone tablets — which can be used together to induce abortion early in pregnancy — was removed from the exhibit along with a wall plaque detailing Idaho’s abortion laws.

Four documentary video and audio works by artist Lydia Nobles that showed women talking about their own experiences with abortion were also removed. And part of artist Michelle Harney’s series of 1920s-era letters written to Planned Parenthood founder Margaret Sanger were stricken from the show.

“To be censored like that is shocking and surreal,” said Majkut, who designs her art to be educational rather than confrontational. “If the most even-keeled, bipartisan artwork around this topic is censored, then everything is going to be censored.”


Logan Fowler, the spokesman for LCSC, said the school made the decision after consulting with attorneys about whether showing the art could violate the law. Republican Rep. Bruce Skaug, the author of the law, said Tuesday that it was not intended to “prevent open discussion” of abortion — only to prevent tax dollars from being used to promote it.

The art exhibit censorship comes just two months after another controversial decision by Skaug. As chairman of the Idaho House Judiciary and Rules Committee, Skaug announced in January that people under age 18 would not be allowed to testify in his committee. Another Republican committee chair soon followed suit.

Lawmakers have the ability to limit committee testimony, and often use those limits to keep the legislature’s work focused and timely. Still, the age-based speech restriction appeared to be a first for the state.

A group of teens took action, launching phone and email campaigns and staging protests.

“There is a clear lack of foresight in politicians who seek to eliminate the voices of those who will one day elect and eventually supersede them,” a group of 32 high school student leaders wrote in a joint opinion piece sent to news outlets across the state. “We ask Idaho’s Republican leaders, what are you so afraid of?”

The lawmakers eventually modified their rules, allowing youth to testify as long as they have signed permission slips from a parent or guardian.

Skaug said the rule was necessary to ensure parents are aware if their kids are leaving school to testify at the Statehouse. He still intends to give priority to older residents when testimony time is limited, but said he’s not aware of any youth actually being denied the chance to testify so far this year.

For Cohn, the efforts in Idaho and elsewhere reflect the danger of trying to restrict the expression of people who hold opposing views.

“We have to be ever-vigilant if we want our culture of individual freedoms to prevail,” he said. “Bad ideas are better dealt with through debate and dialogue than government censorship.”



NBC4 Washington

‘Do something': Read and watch Michelle Obama's speech to the Democratic National Convention

The former first lady made a personal and passionate speech to the DNC in Chicago. Published August 20, 2024; updated August 21, 2024 at 11:08 a.m.

Editor's note: The text of the speech below is as prepared. Her actual delivery may have varied.


Hello Chicago! 

Something wonderfully magical is in the air, isn’t it? 

Not just here in this arena
 but spreading all across this country we love
 a familiar feeling that’s been buried too deep for too long.

You know what I’m talking about? 

It’s the contagious power of hope! 


The anticipation
 the energy
 the exhilaration of once again being on the cusp of a brighter day.

The chance to vanquish the demons of fear, division, and hate that have consumed us
 and continue pursuing the unfinished promise of this great nation—the dream that our parents and grandparents fought and died and sacrificed for.

America, hope is making a comeback!

To be honest, I’m realizing that until recently, I have mourned the dimming of that hope.

Maybe you’ve experienced the same feelings
 a deep pit in my stomach
 a palpable sense of dread about the future.

And for me, that mourning has been mixed with my own personal grief. 

The last time I was in Chicago was to memorialize my mother—the woman who showed me the meaning of hard work, humility, and decency
 who set my moral compass high and showed me the power of my voice.

I still feel her loss so profoundly
 I wasn’t even sure I’d be steady enough to stand before you tonight.

But my heart compelled me to be here because of the sense of duty I feel to honor her memory
 and to remind us all not to squander the sacrifices our elders made to give us a better future.

You see, my mom, in her steady, quiet way, lived out that striving sense of hope every day of her life. 

She believed that all children — all people — have value
 that anyone can succeed if given the opportunity.

She and my father didn’t aspire to be wealthy
 in fact, they were suspicious of those who took more than they needed.

They understood that it wasn’t enough for their kids to thrive if everyone else around us was drowning. 

So my mother volunteered at the local school
 she always looked out for the other kids on our block.

She was glad to do the thankless, unglamorous work that for generations, has strengthened the fabric of this nation. 

The belief that if you do unto others
 if you love thy neighbor
 if you work and scrape and sacrifice, it will pay off—if not for you, then maybe for your children or your grandchildren
 those values have been passed on through family farms and factory towns
 through tree-lined suburbs and crowded tenements
 through prayer groups and National Guard units and social studies classrooms.

Those were the values my mother poured into me until her very last breath. 

Kamala Harris and I built our lives on those same foundational values. 

Even though our mothers grew up an ocean apart, they shared the same belief in the promise of this country. 

That’s why her mother moved here from India at 19. 

It’s why she taught Kamala about justice
 about our obligation to lift others up
 about our responsibility to give more than we take.

She’d often tell her daughter, “Don’t sit around and complain about things—do something!”

So with that voice in her head, Kamala went out and worked hard in school, graduating from an HBCU
 earning her law degree at a state school
 and then she went on to work for the people.

Fighting to hold lawbreakers accountable and strengthen the rule of law
 fighting to get folks better wages
 cheaper prescription drugs
 a good education
 decent health care, childcare, and elder care.

From a middle-class household, she worked her way up to become Vice President of the United States of America. 

Kamala Harris is more than ready for this moment. 

She is one of the most qualified people ever to seek the office of the presidency
 and she is one of the most dignified—a tribute to her mother, to my mother, and probably to your mother too
 the embodiment of the stories we tell ourselves about this country.

Her story is your story
 it’s my story
 it’s the story of the vast majority of Americans trying to build a better life.

Kamala knows, like we do, that regardless of where you come from, what you look like, who you love, how you worship, or what’s in your bank account
 we all deserve the opportunity to build a decent life
 all of our contributions deserve to be accepted and valued.

Because no one has a monopoly on what it means to be an American
 no one!

Kamala has shown her allegiance to this nation, not by spewing anger and bitterness, but by living a life of service and always pushing the doors of opportunity open for others. 

She understands that most of us will never be afforded the grace of failing forward
 we will never benefit from the affirmative action of generational wealth.

If we bankrupt a business
 or choke in a crisis, we don’t get a second, third, or fourth chance.

If things don’t go our way, we don’t have the luxury of whining or cheating others to get further ahead
 we don’t get to change the rules so we always win.

If we see a mountain in front of us, we don’t expect there to be an escalator waiting to take us to the top. 

No, we put our heads down. We get to work. In America, we do something. 

And throughout her entire life, that’s exactly what we’ve seen from Kamala Harris: the steel of her spine
 the steadiness of her upbringing
 the honesty of her example
 and yes, the joy of her laughter and her light.

It couldn’t be more obvious
 of the two major candidates in this race, only Kamala Harris truly understands the unseen labor and unwavering commitment that has always made America great.

Unfortunately, we know what comes next
 we know folks are going to do everything they can to distort her truth.

My husband and I, sadly, know a little something about this. 

For years, Donald Trump did everything in his power to try to make people fear us. 

His limited and narrow view of the world made him feel threatened by the existence of two hardworking, highly educated, successful people who also happened to be Black. 

Who’s going to tell him that the job he’s currently seeking might just be one of those “Black jobs”?  

It’s his same old con: doubling down on ugly, misogynistic, racist lies as a substitute for real ideas and solutions that will actually make people’s lives better. 

You see, gutting our health care
 taking away our freedom to control our bodies
 the freedom to become a mother through IVF, like I did—those things are not going to improve the health outcomes of our wives, mothers, and daughters.

Shutting down the Department of Education
 banning our books—none of that will prepare our kids for the future.

Demonizing our children for being who they are and loving who they love—that doesn’t make anybody’s life better. 

Instead, it only makes us small. 

And let me tell you
 going small is never the answer.

Going small is the opposite of what we teach our children. 

Going small is petty
 it’s unhealthy
 and quite frankly, it’s unpresidential.

Why would we accept this from anyone seeking our highest office? 

Why would we normalize this type of backward leadership? 

Doing so only demeans and cheapens our politics
 it only serves to further discourage good, big-hearted people from wanting to get involved at all.

America, our parents taught us better than that
 and we deserve so much better than that.

That’s why we must do everything in our power to elect two of those good, big-hearted people
 there is no other choice than Kamala Harris and Tim Walz!

But as we embrace this renewed sense of hope, let us not forget the despair we have felt
 let us not forget what we are up against.

Yes, Kamala and Tim are doing great right now
 they’re packing arenas across the country
 folks are energized
 we’re feeling good.

But there are still so many people who are desperate for a different outcome
 who are ready to question and criticize every move Kamala makes
 who are eager to spread those lies
 who don’t want to vote for a woman
 who will continue to prioritize building their wealth over ensuring everyone has enough.

No matter how good we feel tonight or tomorrow or the next day, this is still going to be an uphill battle
 so we cannot be our own worst enemies.

No, the minute something goes wrong
 the minute a lie takes hold, we cannot start wringing our hands.

We cannot get a Goldilocks complex about whether everything is just right. 

We cannot indulge our anxieties about whether this country will elect someone like Kamala instead of doing everything we can to get someone like Kamala elected.

Kamala and Tim have lived amazing lives
 I am confident they will lead with compassion, inclusion, and grace.

But they are still only human. They are not perfect. And like all of us, they will make mistakes. 

But luckily, this is not just on them. 

No, this is up to us—all of us—to be the solution we seek
 it is up to all of us to be the antidote to all the darkness and division.

I don’t care how you identify politically
 whether you’re a Democrat, Republican, independent, or none of the above
 this is our time to stand up for what we know in our hearts is right.

To stand up not just for our basic freedoms but for decency and humanity
 for basic respect, dignity, and empathy
 for the values at the very foundation of this democracy.

It’s up to us to remember what Kamala’s mother told her: Don’t just sit around and complain — do something! 

So if they lie about her, and they will, we’ve got to do something! 

If we see a bad poll, and we will, we’ve got to put down that phone and do something! 

If we start feeling tired
 if we start feeling that dread creeping back in
 we’ve got to pick ourselves up, throw water on our faces, and do something!

We have only two and a half months to get this done
 only 11 weeks to make sure every single person we know is registered and has a voting plan.

So we cannot afford for anyone to sit on their hands and wait to be called upon
 don’t complain if no one from the campaign has specifically reached out to ask for your support
 there is simply no time for that kind of foolishness.

You know what we need to do.

So consider this to be your official ask: Michelle Obama is asking you to do something!

Because this is going to be close. 

In some states, just a handful of votes in every precinct could decide the winner. 

So we need to vote in numbers that erase any doubt
 we need to overwhelm any effort to suppress us.

Our fate is in our hands. 

In 77 days, we have the power to turn our country away from the fear, division, and smallness of the past. 

We have the power to marry our hope with our action. 

We have the power to pay forward the love, sweat, and sacrifice of our mothers and fathers and all those who came before us. 

We did it before and we sure can do it again. 

Let us work like our lives depend on it


Let us keep moving our country forward and go higher — yes, higher — than we’ve ever gone before


As we elect the next President and Vice President of the United States, Kamala Harris and Tim Walz!



