
Published on April 25, 2024

When Hillary Clinton and Clarence Thomas are in agreement, you know something is amiss. Albeit for different reasons, they’d each like to see adjustments to Section 230 of the Communications Decency Act of 1996.

This law is antiquated. Twenty-eight years ago, the Internet was in its infancy; 1996 was the heyday of AOL, Blockbuster Video, and dial-up modems.

To put things in perspective, Section 230 became law three years before Napster, eight years before Facebook, and decades before social media companies began using sophisticated algorithms to curate content for their users. In short, the Internet was an entirely different entity in 1996.

Nevertheless, thanks to decades of broad judicial interpretation, Section 230 provides social media and search companies with de facto absolute immunity from liability for all third-party content on their platforms.

Section 230 is a bipartisan issue, drawing ire from both sides of the aisle

Essentially, Section 230 provides online platforms with “safe harbor” protection against liability for any content their users post. The relevant provision, Section 230(c)(1), consists of just 26 words that helped create the Internet as we know it today: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This provision is no longer adequate. At a recent House Committee on Energy and Commerce hearing, Congresswoman Anna Eshoo (D – Menlo Park, CA) acknowledged that lawmakers didn’t have a crystal ball back in 1996. Eshoo said,

“We didn’t anticipate the harms to children. We didn’t anticipate how it would be exploited to spread misinformation and disinformation; interfere with our elections, and threaten the foundations of our democracy and society. And we didn’t anticipate online platforms designing their products to algorithmically amplify content despite its threats to the American people. All of this necessitates Congress to update the law.”

Many conservatives, including Clarence Thomas, take issue with 230(c)(2), claiming it facilitates censorship and silences conservative voices. Thomas has been calling for 230 to be revisited since at least October 2020; the other Supreme Court justices have been reluctant to touch it.

The Supreme Court has been reluctant to rule on Section 230

In May 2023, the court decided two important cases: Twitter v. Taamneh and Gonzalez v. Google.

In Twitter v. Taamneh, the court assessed whether Twitter was liable for “aiding and abetting” ISIS terrorists. Plaintiffs argued that Twitter facilitated communication between terrorists in the lead-up to a 2017 attack in Istanbul. In a unanimous opinion, the court held that Twitter hadn’t knowingly provided the terrorists with substantial assistance and therefore wasn’t liable, a ruling that allowed the justices to avoid the Section 230 question entirely.

Gonzalez v. Google dealt with a 2015 ISIS terrorist attack in Paris. In this case, the plaintiffs argued that Google was not merely a service provider but also a “content creator” because Google’s algorithm directed specific content to specific users. Reluctant to rule on Section 230, the court remanded the case to the 9th Circuit.

A closer look at 230, market forces, and other relevant issues

Although many legal pundits argue that Section 230(c)(1) and Section 230(c)(2) are consistent and compatible, I don’t see it that way.

Section 230(c)(1) shields platforms from legal liability for content posted by third parties. And Section 230(c)(2)—the protection for “Good Samaritan” blocking and screening of offensive material—allows platforms to monitor their websites for harmful content without being held liable for these moderation efforts.

I understand that these two provisions are supposed to work in tandem. Yet, I’d argue that the existence of (c)(1)—the liability protection—disincentivizes platforms from engaging in (c)(2)—moderating content.

Why would platforms like Meta, X, Google, TikTok, and Snapchat bother spending money, time, and resources policing harmful content on their platforms if they’re not going to be held liable? They wouldn’t. And they don’t.

In fact, as long as the proliferation of harmful (often illegal) content doesn’t create a PR backlash, Big Tech companies will continue to allow this content to flourish. After all, these are ad-driven businesses that make money when there is engagement. Unfortunately, market forces encourage predominantly negative content on many of these platforms.

At the House Committee on Energy and Commerce hearing, Middlebury College professor Allison Stanger identified the main problems:

“The immunity from liability…and also the ad-driven business model, which means that the algorithms are optimized for engagement…human beings are most engaged when they’re outraged.”

Also, traditional media outlets are dying. More people than ever are getting their information from these platforms, and as Stanger points out, “Section 230 currently covers recommender algorithms, content moderation, and search. They’re all immune. And that’s a very sweeping mandate.”

Stanger brings up one other important issue: national security. Bad actors, sponsored by nation-states or otherwise, are exploiting 230. Without adequate content moderation, it’s easier for them to spread misinformation, recruit terrorists, and facilitate cyberattacks.

Three arguments against reforming Section 230

1. Section 230 reform would stifle free speech. We need to keep 230 fully intact as a First Amendment safeguard.

Yes, if Section 230(c)(1) were reformed, there would be more content moderation. However, this is hardly a First Amendment issue. My main concern is that we’re not seeing the removal of content that isn’t protected by the First Amendment. This includes certain types of obscenity, as well as content that causes harm or encourages illegal activity. Right now, platforms can host such content with impunity. That just doesn’t make sense.

2. Section 230 reform would stifle innovation, reduce competition, and allow Big Tech companies to become more powerful.

Not necessarily. The popular argument that start-ups don’t have the funds to monitor the user-generated content on their platforms is short-sighted. Whether 230 reform reduces competition would depend on how 230 is revised. For example, a “reasonable duty of care” standard could be applied, whereby the effort a platform must make to monitor harmful content would be proportional to the size of its user base, financial resources, or market capitalization.

3. Removing platforms’ liability protection would inspire frivolous lawsuits.

Yes. I concede that this is plausible. Even one of Section 230’s original architects, Senator Ron Wyden (D-OR), is against a repeal of 230 for fear that it would result in bad-faith lawsuits.

I am a proponent of reforming Section 230, not repealing it entirely.

The 230 solution lies in reform, not in a full repeal

Firstly, we do need reform. Platforms shouldn’t be able to continue to profit from illegal activity, such as drug sales, sex trafficking, and the circulation of child sex abuse material. Moreover, platforms cannot continue to be as deliberately indifferent as they’ve been to date.

When it comes to what exactly 230 reform should look like, some folks have suggested carve-outs for certain content and exemptions for smaller companies. These are bad ideas.

I agree with George Washington University law professor Mary Anne Franks, who believes a solution lies in clarifications and amendments to (c)(1).

Another solution is to tie the platforms’ safe harbor provisions directly to their content moderation policies. Platform immunity could be contingent upon the demonstration of a reasonable duty of care.

Right now, due to existing case law, many judges feel as if their hands are tied. However, there have been some developments. The 9th Circuit Court of Appeals’ Section 230 jurisprudence has been evolving over the years. And in July 2021, the Texas Supreme Court ruled Facebook was not protected by 230 in a sex-trafficking case.

Recently, some plaintiffs have also been able to get around 230 by claiming that it is the platforms’ design choices that are causing the harm. By focusing on the design features of the platform—as opposed to individual users’ messages on the platform—some lawyers have been able to skirt the 230 issue.

Key takeaways

As of today, social media and search companies operate with an absolute profit motive and little to no accountability. As the law is currently written, Section 230 of the Communications Decency Act of 1996 allows platforms to remain deliberately indifferent to the harms they are causing.

Reform is necessary, and it should occur within Section 230(c)(1); there need to be limitations to platforms’ immunity protections, and these limitations need to be explicitly spelled out.

At the same House Committee on Energy and Commerce hearing, Franks stressed the need to limit media platforms’ blanket liability protections. She explained,

“[There needs to be] limiting, clearly, within (c)(1)…that this is not going to provide immunity for platforms and intermediaries that are soliciting, encouraging, profiting from, or being deliberately indifferent to harmful content. I think that’s the [needed] limitation. Something along those lines is needed for (c)(1).”

I agree. Congress needs to pass legislation to place limits on Section 230(c)(1) to account for the modern-day Internet. If a search or social media platform is facilitating illegal content (e.g., terrorist activity, CSAM, drug sales, sex trafficking), then that platform shouldn’t be able to hide behind Section 230.

Connecting terrorists and sex offenders with one another isn’t publishing; why should this behavior be protected as free speech? It shouldn’t. It’s time to reform Section 230.

John Donegan

Enterprise Analyst, ManageEngine

John is an Enterprise Analyst at ManageEngine. He covers infosec, cybersecurity, and public policy, addressing technology-related issues and their impact on business. Over the past fifteen years, John has worked at tech start-ups, as well as B2B and B2C enterprises. He has presented his research at five international conferences, and he has publications from Indiana University Press, Intellect Books, and dozens of other outlets. John holds a B.A. from New York University, an M.B.A. from Pepperdine University, an M.A. from the University of Texas at Austin, and an M.A. from Boston University.
