Big Tech’s First Amendment Argument Might Eviscerate Section 230

Last month, the 11th Circuit Court of Appeals held that several parts of Florida’s social media law, S.B. 7072, were likely unconstitutional. That holding should give readers pause. The court’s decision did not rest on Section 230. Rather, it rested on the argument that social media platforms (platforms) enjoy a First Amendment right to censor, shadow-ban, deplatform, and moderate third-party content in any way they want. Last week, the Supreme Court blocked enforcement of a similar Texas social media law that the 5th Circuit had greenlighted. While the justices who granted relief on the shadow docket did not explain their reasoning, the decision produced an unusual 5-4 split.

While champions of big tech are cheering the 11th Circuit’s ruling, they shouldn’t get too excited. For years they’ve wrongly argued that Section 230 codified the First Amendment. That argument presupposes both that platforms enjoyed no First Amendment rights before Section 230 and that the First Amendment provides a right to be free from publisher liability. Both premises are wrong, but they raise the real question: if the First Amendment already protects a platform’s content moderation decisions, what’s the point of Section 230? Well, now it might be unconstitutional.

Let’s recall what Section 230 says. Under (c)(1), platforms cannot be treated as the publisher or speaker of third-party content. Under (c)(2)(A), if they want to take down content, they must act voluntarily and in good faith, and either they or the user must consider the content “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable…”

While the bulk of the (c)(2)(A) arguments have been over the correct interpretation of “otherwise objectionable,” the provision also requires platforms to act in good faith. One provision of the Florida law that the court held likely unconstitutional required platforms to apply their content moderation policies consistently. In other words, the court held that platforms enjoy a First Amendment right not to apply their content moderation policies in good faith. If that’s right, (c)(2)(A) is unconstitutional because it restricts the way platforms can moderate content.

If a court renders (c)(2)(A) unconstitutional, what about (c)(1)? Recall that it protects platforms from being treated as a publisher or speaker. Its purpose was to overturn the state court decision in Stratton Oakmont v. Prodigy, which subjected Prodigy to publisher liability (strict liability) because Prodigy held itself out as a family-friendly forum that consistently enforced its content moderation policies. Read against Stratton Oakmont, these statutory provisions work together. Under (c)(1), platforms cannot be strictly liable for third-party content, and under (c)(2)(A), they have several grounds to remove content and keep their immunity if they act in good faith. For example, if a third party posts defamatory content, the platform cannot be strictly liable for it, and it has a statutory basis to remove it while keeping its immunity.

Because the provisions work together, they are likely inseverable. In other words, if a court were to hold (c)(2)(A) unconstitutional, it should follow that (c)(1) falls with it. While one might quibble with this, the Telecommunications Act of 1996, from which Section 230 originates, does not contain a severability clause. Therefore, no statutory provision tells a court it must preserve (c)(1) if (c)(2)(A) is unconstitutional.

The 11th Circuit’s opinion also supplies an independent basis for taking issue with (c)(1). While Florida’s law labeled large platforms common carriers (entities that hold themselves out to serve the public and cannot discriminate), the court held that the legislature could not transform them into common carriers by fiat. But (c)(1) suffers from the opposite problem: if courts would otherwise determine that platforms are in fact publishers, why can Congress declare that they are not publishers and insulate them from publisher liability?

Section 230 remains a problem. On the one hand, platforms tell us that Section 230 immunizes them from liability for all third-party content because it’s not their own and they do not endorse any of it. On the other, they claim a First Amendment right to host, moderate, and curate third-party content in whatever way they want. Both cannot be true at once, and if it weren’t for Section 230, they wouldn’t be.

Alex Deise serves as Policy Counsel at FreedomWorks and is an attorney.