How Big Tech gets away with censorship: 5 facts about §230

As FreedomWorks takes a new position on §230, we have published, to date, two pieces that provide important context to this discussion. The first detailed why the statute was passed, the short congressional debates, and what the statute says in plain English. The second detailed how courts have misconstrued the statutory text in ways that have led us to the position we find ourselves in today. Before our final post, which will detail solutions and possible ways forward, this post will respond to several arguments we have seen in response to our first two posts.

Summary

  • Under the state action doctrine, there are three situations when private parties can become state (government) actors and violate an individual’s constitutional rights. While §230 provides online platforms immunity in certain contexts, litigants need concrete evidence to show platforms are working with the government to prohibit speech in violation of the First Amendment. Without it, the state action doctrine does not apply, and platforms cannot violate the First Amendment.
  • §230 does not provide online platforms First Amendment rights. First, they enjoyed First Amendment rights before §230 and still do today. Second, §230(c)(1) protects platforms from being treated as a publisher or speaker, which is not a constitutional right. Third, when Congress wants to enforce constitutional rights in this context, it must act pursuant to its §5 power under the Fourteenth Amendment, which it did not do in passing §230. Rather, §230 was passed pursuant to Congress’ Commerce Clause power, which cannot be used to enforce constitutional rights.
  • The best reading of “otherwise objectionable” in §230(c)(2)(A) does not allow online platforms to take down third-party speech for whatever reason they want. Rather, it should be interpreted under the statutory interpretation canon of ejusdem generis to reach only material covered by the Communications Decency Act, of which §230 is a part. In Enigma Software v. Malwarebytes, the 9th Circuit rejected both the ejusdem generis reading of “otherwise objectionable” and the boundless interpretation that covers anything the platform wants.
  • Under current judicial interpretations of §230, online platforms can knowingly leave up as much illegal content on their platform as they want and still enjoy the protection of §230(c)(1). Both Judge Frank Easterbrook and Justice Clarence Thomas have questioned this interpretation. According to Justice Thomas, the statute’s correct interpretation reads “if a company unknowingly leaves up illegal third-party content, it is protected from publisher liability by §230(c)(1); and if it takes down certain third-party content in good faith, it is protected by §230(c)(2)(A).”
  • Under the best textualist (reading a statute for its plain meaning) reading, §230 does not provide a cause of action against online platforms that illegally take down third-party content. While the Supreme Court previously inferred a cause of action when statutes did not provide one, it largely does not anymore.

1. Without more, §230 does not transform private online platforms into state (government) actors.

Some argue that when online platforms (platforms) censor or ban speech, they violate the First Amendment’s protection of freedom of speech. But this overly simplistic argument is rarely asserted because, for the First Amendment to apply, state action is required. In other words, there must be some form of government involvement to bring a First Amendment claim. If there isn’t, the claim is dismissed.[^1] For example, if Facebook deactivates my account because the sole basis of my page is to convince users to sell Facebook stock, there’s no state action. Compare that to Facebook censoring or banning speech critical of Attorney General Merrick Garland because he told Facebook it would face repercussions if it didn’t.

The stronger legal argument is that §230’s wide grant of immunity plus government working with or encouraging platforms to ban or censor speech might lead to a First Amendment violation. Doctrinally, this makes sense. If the government is constitutionally prohibited from taking an action, it shouldn’t be able to immunize private parties to perform that action for it. For example, if the government cannot establish the probable cause necessary to search someone’s home, it cannot hire a private party, give it complete immunity, and instruct the private party to search the home instead. As the Supreme Court stated in Norwood v. Harrison (1973), “it is…axiomatic that a state may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.”

The Supreme Court has recognized three situations that turn private parties into state actors under the state action doctrine. “[A] private entity can qualify as a state actor in a few limited circumstances— including, for example, (i) when the private entity performs a traditional, exclusive public function… (ii) when the government compels the private entity to take a particular action…or (iii) when the government acts jointly with the private entity…” See Manhattan Community Access Corp. v. Halleck (2019).

To fall within the first category, “the government must have traditionally and exclusively performed the function,” and “[i]t is not enough that [the]…government exercised the function in the past, or still does.” See Halleck. Very few things fall into this category—such as running elections or operating a company town. In contrast, many things have not fallen into this category such as running a sports league, operating a nursing home, providing special education, etc. In Halleck, the Court noted that “hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.” In other words, Halleck strongly suggests that platforms would likely not become state actors under the traditional, exclusive public function test.

To fall within the second category, the government must compel the private entity to take a particular action. In Skinner v. Railway Labor Executives’ Association (1989), the Court held that the Federal Railroad Administration’s (FRA) regulations that authorized railroad companies to test their employees for drugs or alcohol constituted state action. While Subpart C required railroads to ensure that covered employees provided blood and urine samples for testing by the FRA after certain events, Subpart D authorized railroads to require covered employees to submit to breath or urine tests in circumstances not addressed in Subpart C. Several features of Subpart D signaled government encouragement rather than a passive position toward private conduct. For example, the regulations “preempt[ed] state laws, rules, or regulations covering the same subject matter…and [were] intended to supersede any provision of a collective bargaining agreement…” They also gave “the FRA the right to receive certain biological samples and test results procured by railroads.” The railroads could not terminate the FRA’s authority under Subpart D, and if a covered employee declined an employer’s request for a test, they were withdrawn from covered service. In sum, Subpart D’s testing authorization constituted state action because the FRA removed all legal barriers to testing, the FRA had a right to see the testing results, and railroads could not bargain away their authority to perform these tests.

Jed Rubenfeld, a law professor at Yale Law School, argued that based on Skinner, §230 in some situations might make platforms state actors. Rubenfeld linked Subpart D’s immunization (preemption) from state laws to §230(c)(2)(A)’s immunization that allows platforms to restrict access to material for specific reasons. He argued that like the railway workers who could not decline the tests, platforms “cannot decline to submit to censorship.” Finally, he argued that like the FRA’s strong preference for testing in Skinner, the legislative history of §230 shows the “government’s strong preference for the removal of ‘offensive’ content.” In response, Alan Rozenshtein, a law professor at the University of Minnesota Law School, argued that §230 does not provide a government endorsement of content removal because §230(c)(1) allows companies to keep up as much material as they wish and not be treated as a publisher or speaker. Further, platforms could “divest themselves of the Good Samaritan immunity by clearly stating in their terms of service that they will not censor content.” Regardless of who has the stronger legal argument, there are serious arguments on both sides, based on Skinner, that platforms might become state actors in certain situations.

To fall within the third category, the government must act jointly with the private entity. In Lugar v. Edmondson Oil Company (1982), the Court put forth a two-part test—“the deprivation [of a constitutional right] must be caused by the exercise of some right or privilege created by the State…[and] [s]econd, the party charged with the deprivation must be a person who may fairly be said to be a state actor.”

In Lugar, Lugar operated a truckstop and fell behind on payments he owed to his supplier, Edmondson Oil Company (Edmondson). Edmondson sued in state court and sought a writ of attachment provided for under Virginia law against Lugar that prevented him from selling any property he owned while the case was ongoing. A month after the writ of attachment was issued, a state trial judge canceled it, holding there was no statutory justification for its issuance. In response, Lugar sued Edmondson under 42 U.S.C. § 1983, claiming it worked with the government to deprive him of his property in violation of the Fourteenth Amendment’s Due Process Clause. The Court held that Lugar passed step one of the test because “[w]hile private misuse of a state statute does not describe conduct that can be attributed to the State, the procedural scheme created by the statute obviously is the product of state action.” In other words, Lugar was successful because he argued the procedure set forth in the law was unconstitutional. In contrast, if Lugar had argued that Edmondson’s misuse of the law was a Due Process violation, that would not have constituted state action because the misuse would be attributable to Edmondson, not the state. The Court went on to remand the case for Lugar to try to prove that Edmondson was “a person who may fairly be said to be a state actor.”

An argument based on Lugar that platforms are state actors would face difficulty at step two. At step one, §230 is a statute passed by Congress, just as the state statute at issue in Lugar was a statute passed by the Virginia legislature. Moreover, a plaintiff would argue that platforms removed or censored content because of §230, not because they misused or misinterpreted §230. At step two, a plaintiff must prove that when the platform censored or restricted content, it could be fairly said it acted as a state actor. Here, the plaintiff would need to provide evidence that the government instructed the platform to take action to restrict or censor content. But who qualifies as the government? Members of Congress speaking at committee hearings or during interviews? What about statements by the president or his press secretary? What happens if there is a conflict inside the executive branch such as Anthony Fauci telling Facebook to censor content but President Trump telling them not to? We don’t know.

Recently, President Trump and several others sued Twitter, arguing, among other things, that it violated the First Amendment when it took down his account because it acted as a state actor. Instead of placing state action into one of the three categories articulated by the Court in Halleck, the court stated that there was “no specific formula for defining state action” and that the key inquiry is “have plaintiffs plausibly alleged that Twitter was behaving as a state actor pursuant to ‘a governmental policy’ when it closed their accounts?” The court held that Twitter did not act as a state actor because the only evidence President Trump provided was “ambiguous and open-ended statements [from Democrats in Congress] to the effect that ‘we may legislate’ something unfavorable to Twitter or the social media sector” (i.e., ban or censor Trump or lose §230 immunity). Unlike other cases that entailed specific government threats (usually from the executive branch), the threats President Trump offered were statements by congressional Democrats debating different bill ideas. Those threats came from a branch of government not tasked with enforcing the law, but one that enjoys the power to make law and conduct wide-ranging inquiries on subjects it might legislate on, such as §230 reform. While the court cited Halleck only twice, its remark that there is “no specific formula for defining state action” is odd considering that Halleck goes out of its way to provide three different situations with citations for each.

2. §230 is a grant of statutory immunity (free from liability) in certain situations; it does not provide online platforms constitutional protection.

Perhaps the most popular §230 myth is that the statute affords constitutional protections to platforms. This argument has appeal because, broadly speaking, people conflate immunity with constitutional protections. For example, if you are prosecuted for a speech-related crime, raise the First Amendment as a defense, and win, you cannot be held liable. See Texas v. Johnson (1989). But the problem with conflating these distinct concepts is obvious—it presupposes that platforms did not have constitutional protections before §230 was enacted. That argument is wrong. They enjoyed First Amendment protections before §230 was enacted and still do today. While the reach of those protections is still up for debate, no one argues that they don’t have them.

Unfortunately, the argument gets even worse when one reads the text of 47 U.S.C. §230. First, §230(c)(1) protects platforms from being treated as a publisher or speaker. But publishers, like newspapers, can be liable for third-party speech. Under tort law, they are liable in the same way the speaker is (strict liability). See Restatement (Second) of Torts § 578 (1977). Take an obvious example. In New York Times v. Sullivan (1964), the New York Times was sued for libel for publishing an advertisement that L.B. Sullivan believed was false. While the Court held that Sullivan (and all public officials) needed to show actual malice to comply with the First Amendment when they sue for defamation, the Court did not dismiss the case because the New York Times enjoyed a First Amendment right to be free from publisher liability. That’s because publishers do not have a First Amendment right to be free from publisher liability.

§230(c)(1) was enacted to overturn Stratton Oakmont v. Prodigy, which held that Prodigy could be liable as a publisher because it held itself out to the public as a family-friendly forum that enforced serious content moderation policies. No one argued that Stratton Oakmont was wrong because it was unconstitutional to hold Prodigy liable as a publisher. Rather, they argued that the effects of Stratton Oakmont were problematic because online platforms faced an unenviable choice—closely monitor your website and be strictly liable for third-party content, or keep third-party content at arm’s length, allow filthy and disgusting material on your website, and be held liable only if you know or have reason to know about illegal third-party content (distributor liability). While §230(c)(1) was a perfectly logical response to this situation, that does not make it a constitutional protection.

In response to this attack, some defenders point to the 4th Circuit’s decision in Zeran v. America Online, which interpreted “publisher” under §230(c)(1) to prohibit “lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content…” The argument follows that Zeran’s interpretation of “publisher” gives online platforms the constitutional protections of a publisher. And because Zeran was an interpretation of §230, it must follow that the statute does in fact grant constitutional protections. While Zeran’s interpretation of the term “publisher” is highly debatable, all the court did was interpret the statute. It defined “publisher” broadly, in a way it thought consistent with the statute’s purpose. So, if the New York Times enjoys certain protections because of its status as a publisher, it might make sense to define “publisher” with those protections in mind. But Zeran did not say online platforms are publishers, and §230 does not say that either. Rather, §230 simply says they cannot be treated as publishers. The legal status of these online platforms is still up for debate, and §230 does not resolve it.

To bolster the argument above, defenders claim that §230(c)(2)(A) seals the deal because publishers have a right to take down content, and the statute gives them that right. While traditional publishers do enjoy that right, as stated above, the statute does not turn online platforms into publishers; it prevents them from being treated as publishers. Second, the text does not even purport to provide the full right. At a bare minimum, it requires online platforms to act in a voluntary and good-faith manner before restricting access to third-party content. While the voluntariness requirement is not problematic, the good-faith requirement is. The argument that publishers can exercise their right to take down content only if they act in good faith is weak in theory and ridiculous in practice. Under it, the New York Times could be liable if it decided not to run an op-ed out of personal disdain for the author but lied to the author and offered a perfectly legitimate-sounding reason for its decision.

To take it a step further, the text then requires online platforms or the user to consider the content “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable…” before it can restrict access to it. So, not only must publishers act in good faith when they take down content, but the reasons they can do so are severely restricted? That doesn’t make any sense, and it’s obviously inconsistent with the scope of the rights of publishers. Some argue “otherwise objectionable” saves this argument because it should be interpreted to allow online platforms to remove content for whatever reason they want. But as will be detailed below (see #3), “otherwise objectionable” does not mean whatever the online platform wants.

Worse yet, the argument faces a very problematic third defect. The Communications Decency Act of 1996, from which §230 originates, was passed under Congress’ Commerce Clause power. Congress does not and cannot grant constitutional protections via its Commerce Clause power. See Seminole Tribe of Florida v. Florida (1996). While Congress can enforce the substance of the First Amendment, it would need to act pursuant to its §5 power under the Fourteenth Amendment. Because the First Amendment is incorporated via the Fourteenth Amendment’s Due Process Clause, Congress can enforce the substance of the amendment under its §5 power. But when Congress acts under this power, the legislation must have a “congruence and proportionality between the injury to be prevented or remedied and the means adopted to that end. Lacking such a connection, legislation may become substantive in operation and effect.” See City of Boerne v. Flores (1997). In other words, the legislation must be remedial and preventive; it cannot redefine the scope of the right.

For example, in Employment Division v. Smith (1990), the Court held that the First Amendment’s Free Exercise Clause does not provide protection against laws that are neutral and generally applicable. In response, Congress passed the Religious Freedom Restoration Act (RFRA) that redefined the scope of the right to provide protection against laws that are neutral and generally applicable. In other words, Congress told the Supreme Court that its interpretation of the First Amendment was wrong and redefined the scope of the right. In City of Boerne v. Flores (1997), the Court held that RFRA was unconstitutional as applied to state and local governments because RFRA redefined the scope of the Free Exercise Clause: it was not preventive and remedial in nature. Importantly, RFRA’s protection still applies against the federal government because Congress can require the federal government to meet a higher burden (legislating against itself), but Congress must pass the congruence and proportionality standard when it wants to provide constitutional protections against state and local governments. Legislation that meets the Fourteenth Amendment’s congruence and proportionality test also allows private parties to sue state governments for money damages, something that otherwise would be prohibited under the 11th Amendment. See Board of Trustees of University of Alabama v. Garrett (2001).

As in City of Boerne, if the Communications Decency Act of 1996 had been passed pursuant to Congress’ §5 power under the Fourteenth Amendment, the legislation would be unconstitutional because it would redefine the scope of the First Amendment. As stated above, the First Amendment does not provide a right to be free from publisher liability. Therefore, Congress cannot pass legislation granting that protection. Nevertheless, even if an argument could be made that the legislation is remedial and preventive, it would fail congruence and proportionality because it applies everywhere, and the congressional findings did not identify any unconstitutional actions that this legislation would remedy. Again, no one argued that Stratton Oakmont was wrongly decided because it misinterpreted previous First Amendment decisions.

For those who are not swayed by any of the arguments above, consider two final points. First, Eric Goldman, an extremely pro-§230 law professor at Santa Clara University School of Law, has written that “[t]he First Amendment and Section 230 are not substitutes for each other” and “Section 230 substantively protects more speech than the First Amendment…” If Congress did in fact give online platforms constitutional protection in passing §230, why does it “protect more speech than the First Amendment”? And second, while the §230 caselaw is extremely broad, none of those decisions rested on the fact that §230 is a constitutional protection for online platforms. If it is, why did all of these courts miss the argument?

3. §230(c)(2)(A)’s use of “otherwise objectionable” is not a catchall that allows online platforms to restrict access to content for whatever reason they want.

§230(c)(2)(A) states that “[n]o provider or user of an interactive computer service shall be held liable on account of— any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”

This myth builds largely off the argument above that because §230 provides constitutional protections for platforms, “otherwise objectionable” must allow them to remove material for whatever reason they want. As stated above, the argument that §230 provides constitutional protection is wrong. It follows that defining “otherwise objectionable” to mean anything the platform wants is also wrong. Rather, “otherwise objectionable” must be interpreted based on the well-established canons of statutory interpretation.

The canon most applicable to “otherwise objectionable” is ejusdem generis. Under this canon, “where general words follow an enumeration of two or more things, they apply only to persons or things of the same general kind or class specifically mentioned.” See A. Scalia & B. Garner, Reading Law: The Interpretation of Legal Texts 199 (2012). There are two main reasons for this canon. First, “[w]hen the initial terms all belong to an obvious and readily identifiable genus, one presumes that the speaker or writer has that category in mind for the entire passage.” Id. Second, when the general term is given a broad application, “it renders the prior enumeration superfluous.” Id. at 199-200. In other words, the drafter made an intentional choice to use specific words before the general phrase. Those specific words must be given their proper meaning. However, by interpreting the general phrase to encompass everything, the interpreter quite literally reads out the specific words and rejects the limits the drafter placed on the statute.

To take one example, imagine a list of athletes that includes Hank Aaron, Nolan Ryan, Derek Jeter, or other greats. It would be improper to interpret “other greats” to reach athletes from sports other than baseball. These three athletes were all great baseball players. While they might have several things in common (they are all in the Baseball Hall of Fame), the one most obvious thing is they were all professional baseball players. It follows that “other greats” would not reach Pete Sampras, Peyton Manning, or Michael Jordan. If it did, the one obvious thing the three specific athletes have in common would be rendered meaningless, and there would be no reason for the drafter to choose those specific names.

The canon is not without its faults. Courts often struggle as to how broadly or narrowly to define the class delineated by the specific items. Courts have broad authority to determine how much or how little is embraced by the general term. In other words, the canon does not say the court must identify the common term at its lowest level of generality. Rather, it gives courts flexibility to find the common theme amongst the group of terms. Justice Scalia suggested the way to find the common theme is to “[c]onsider the listed elements, as well as the broad term at the end, and ask what category would come into the reasonable person’s mind…[o]ften the evident purpose of the provision makes the choice clear.” Id. at 208.

In using the ejusdem generis canon to interpret “otherwise objectionable,” law professors Adam Candeub and Eugene Volokh believe that it “should be read as limited to material that is likewise covered by the CDA” [Communications Decency Act]. They argue that “otherwise objectionable” should encompass “speech that entices children into sexual conduct” and “anonymous speech said with intent to threaten” because the CDA encompasses both situations. However, the CDA does not encompass—and “otherwise objectionable” therefore should not reach—“speech on ‘the basis of its political or religious content’—restrictions expressly eschewed by CDA § 551…” According to Candeub and Volokh, the common theme that the adjectives preceding “otherwise objectionable” refer to is “speech regulated in the very same [t]itle of the [a]ct, because they all had historically been seen by Congress as regulable when distributed via electronic communications…the terms [did not] appear in the CDA by happenstance; rather, they all referred to material that had long been seen by Congress as of 1996 as objectionable and regulable within telecommunications media…”

Even if one doesn’t agree with using ejusdem generis, several courts have already explicitly rejected the argument that “otherwise objectionable” means anything the platform wants. Most notably, in Enigma Software v. Malwarebytes (9th Cir. 2019), the court rejected reading “otherwise objectionable” to reach anticompetitive conduct. Malwarebytes argued that “otherwise objectionable” reached anything it wanted; in this case, its anticompetitive conduct against its direct rival Enigma Software. In response, the court said that “[w]e cannot accept Malwarebytes’s position, as it appears contrary to CDA’s history and purpose.” Other courts have also rejected this boundless reading of “otherwise objectionable”: see e-ventures Worldwide, LLC v. Google, Inc. (M.D. Fla. 2016); Song Fi Inc. v. Google, Inc. (N.D. Cal. 2015); Sherman v. Yahoo! Inc. (S.D. Cal. 2014); Holomaxx Technologies v. Microsoft Corporation (N.D. Cal. 2011).

Readers of Enigma Software will correctly note that the court also declined to adopt the ejusdem generis interpretation of “otherwise objectionable” because “the specific categories listed in § 230(c)(2) vary greatly” and “[i]f the enumerated categories are not similar, they provide little or no assistance in interpreting the more general category.” In response, Candeub and Volokh argued that the court “missed the link… [v]iolent, harassing, and lewd material is indeed similar, in that it had long been seen—including in the rest of the Communications Decency Act, in which § 230(c)(2) was located—as regulable when said through telecommunications technologies.” Regardless of who is correct, the court’s rejection of ejusdem generis was not a thorough analysis. Bizarrely, the court went on to argue that “otherwise objectionable” should be interpreted “to encapsulate forms of unwanted online content that Congress could not identify in the 1990s.” How exactly would future courts go about discovering what Congress could not identify in the 1990s? It is odd to argue that ejusdem generis is too difficult to apply, but that what Congress could not identify 30 years ago is not. Nevertheless, the court’s rejection of ejusdem generis is likely dictum (not binding) because the argument was not needed to reach its holding.

The strongest argument in favor of the broad reading of “otherwise objectionable” might be the constitutional-doubt canon. The canon “militates against not only those interpretations that would render the statute unconstitutional but also those that would even raise serious questions of constitutionality.” Scalia & Garner, supra, at 247-48. In plain English, the canon says that a court should intentionally interpret a statutory provision to avoid any constitutional concerns. The argument goes that “otherwise objectionable” should mean anything the platform wants because otherwise, limiting the types of content platforms can restrict access to would be a content-based restriction (one that discriminates on the basis of content) and would therefore violate the First Amendment. “Content-based laws—those that target speech based on its communicative content—are presumptively unconstitutional and may be justified only if the government proves that they are narrowly tailored to serve compelling state interests.” See Reed v. Town of Gilbert (2015). While this argument has appeal, no court has faced this question because much of the §230 caselaw revolves around (c)(1), not (c)(2)(A). Moreover, if under (c)(1) the platforms cannot be treated as publishers (or distributors, see Zeran), it perhaps follows that third-party content on their platforms is not their own. If that’s correct, limiting their ability to restrict access to content might not be seen as a speech restriction because they are not the ones speaking.

4. Under current caselaw (court decisions), online platforms cannot lose their protection from being treated as a publisher for third-party content under §230(c)(1) no matter how much third-party content they censor.

One of the most popular myths about §230 is that platforms with the most serious content moderation policies should be treated as publishers. Doctrinally, this makes sense, and it is what the court in Stratton Oakmont held. But Congress passed §230 almost certainly to overrule Stratton Oakmont’s theory of publisher liability. Specifically, §230(c)(1) prohibits online platforms from being treated as publishers or speakers. And §230(c)(2)(A) gives them complete immunity if they restrict access, in a voluntary and good-faith manner, to material they consider “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable…”

In other words, under Stratton Oakmont’s theory of publisher liability, Facebook would likely be liable for a user’s defamatory post. That’s because publishers, like newspapers and magazines, are strictly liable for third-party content that they publish. But under §230(c)(1), Facebook is prohibited from being treated as a publisher. Therefore, Facebook cannot be held liable on any claim that would hold it liable as a publisher—such as defamation. §230(c)(2)(A) then gives Facebook immunity if it decides to restrict access to certain third-party content in a good-faith manner. Furthermore, under this plain reading of §230, platforms could still be held liable as a distributor, like a bookstore or library. In other words, platforms would be liable for third-party speech, like defamation, if they know or have reason to know of its illegality. As a result, but for the courts’ incorrect interpretation of the statute, platforms would not be able to keep up illegal third-party content and remain free from liability.

The above paragraph is not remotely close to how courts have interpreted §230. The Zeran court was the first appellate court to interpret §230(c)(1), and it held that the provision prohibits “lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content…” Instead of a narrow interpretation of “publisher” under §230(c)(1) that gives substance to §230(c)(2)(A), the court in large part read the latter provision completely out of the statute. Because Zeran protects a platform’s decision to publish or withdraw third-party content, platforms are given complete immunity for whatever they allow on their websites, regardless of what their content moderation policies are or whether they enforce them in an evenhanded way. Zeran also eliminated the ability to hold platforms liable as distributors, thereby allowing them to keep up illegal third-party content with no repercussions. Unfortunately, Zeran has been widely embraced across all federal appellate courts.

The one way platforms can lose their §230(c)(1) protection is if they become an information content provider because they are “responsible, in whole or in part, for the creation or development of information provided through the Internet…” See §230(f)(3). Under the statute’s plain terms, an online platform is an interactive computer service. See §230(f)(2). But through their actions editing content, or how they set up their websites, they can become information content providers. This makes sense because platforms do not receive §230(c)(1) protection for their own actions, only for the content of third parties. For example, in Fair Housing Council v. Roommates.com (9th Cir. 2008) (en banc), Roommate became an information content provider because it required users to answer discriminatory (illegal) questions with respect to their housing preferences in order to use the site. The court explained that Roommate became “much more than a passive transmitter of information provided by others; it became the developer, at least in part, of that information.” In contrast, platforms that act as passive conduits and do not materially contribute to the alleged unlawfulness retain their §230(c)(1) protection. Roommate’s problem was that it materially contributed to its users’ alleged unlawfulness because users had to answer its discriminatory questions to use the site. The material contribution test has been widely adopted and is incredibly difficult for plaintiffs to prove. See Jones v. Dirty World Entertainment Recordings LLC (6th Cir. 2014); Force v. Facebook (2nd Cir. 2019).

Some argue that the title of section (c), “Protection for ‘Good Samaritan’ blocking and screening of offensive material,” is “hardly an apt description if its principal effect is to induce ISPs [online platforms] to do nothing about the distribution of indecent and offensive materials via their services. Why should a law designed to eliminate ISPs’ liability to the creators of offensive material end up defeating claims by the victims of tortious or criminal conduct?” See Doe v. GTE Corporation (7th Cir. 2003). This argument has appeal because the section title suggests that online platforms should have to act like good Samaritans (i.e., act in good faith) to keep §230’s protection, given that they receive complete immunity for restricting access to certain material under §230(c)(2)(A). If they refuse to restrict access to material they have complete immunity to remove, why should they be afforded §230(c)(1)’s protection of not being treated as a publisher? And if online platforms are not engaging in content moderation, how can it be said that they are publishers?

While Justice Thomas has not endorsed interpreting the title of section (c) for substantive effect, he has read §230(c)(1) to protect a platform only when it does not know of the illegality of the third-party content it leaves up. In a statement agreeing with the Court’s decision not to hear a §230 case, he commented that “the statute suggests that if a company unknowingly leaves up illegal third-party content, it is protected from publisher liability by §230(c)(1); and if it takes down certain third-party content in good faith, it is protected by §230(c)(2)(A).” See Malwarebytes, Inc. v. Enigma Software Group USA, LLC (2020). This interpretation is noteworthy because it recognizes that §230(c) must be read together. First, Congress did not protect platforms from distributor liability under §230(c)(1). It follows that if a platform knows or has reason to know of the illegality of third-party content, it has two options—take it down, or face liability. Second, if platforms are knowingly leaving up illegal content that the statute gives them complete immunity to restrict access to under §230(c)(2)(A), they are not acting in good faith. In that situation, the platforms should be held liable as distributors because they have knowledge of the illegal content. While no court has interpreted §230 based on Judge Easterbrook’s (see Doe v. GTE Corporation above) or Justice Thomas’ interpretation, both opinions provide real ways to limit the statute to its text.

5. §230 does not provide a cause of action (ability to bring a lawsuit) if an online platform takes down your content.

This fact is a tough pill to swallow, but it is the best reading of the text from a textualist perspective. If a statute does not provide a cause of action, it is likely because Congress intentionally decided not to. That decision deserves respect. In our system of separation of powers, it is not the role of federal courts to create a cause of action when Congress did not provide one.

First, the statute provides no clear cause of action. Compare that with 42 U.S.C. § 3613, which allows claimants to sue in federal or state court to enforce the substantive provisions of the Fair Housing Act. Or compare it with 42 U.S.C. § 2000e–5(f)(1), which allows claimants to sue employers engaging in unlawful employment practices in violation of Title VII of the Civil Rights Act after they’ve exhausted their administrative remedies. Or compare it with RFRA, where Congress provided claimants a clear cause of action to sue the government if it burdens their religious exercise. See 42 U.S.C. § 2000bb–1(c). Unlike these statutes, where Congress was specific in giving claimants the ability to sue, 47 U.S.C. § 230 has no comparable provision.

While there is some caselaw that suggests courts can create (infer) a cause of action if a statute does not provide one, that doctrine has been largely abandoned. Originally, in Cort v. Ash (1975), the Court asked several questions when deciding whether to infer a cause of action: (1) whether the plaintiff belonged to the class of persons the statute was designed to protect; (2) whether Congress intended to create or deny a private remedy; (3) whether that private remedy was consistent with the statutory scheme and/or purpose; and (4) whether the right and remedy traditionally were relegated to state law. In Cort, the Court weighed and balanced the four factors to hold there was no private cause of action for damages against corporate directors under a federal criminal statute.

After decades of back and forth based on the Cort factors, the Court abandoned that approach. Instead, in Alexander v. Sandoval (2001), it stressed that the decision to create a cause of action rests with Congress and that, absent congressional intent, “a cause of action does not exist and courts may not create one, no matter how desirable that might be as a policy matter, or how compatible with the statute.” In rejecting a cause of action to enforce the disparate-impact regulations promulgated under Title VI, the Court’s analysis focused squarely on the text and structure of the statute. Based on Alexander, a court would likely not infer a cause of action under 47 U.S.C. § 230 because neither the text nor the structure of the statute leads to that conclusion.

[^1]: Originally, the Bill of Rights applied only to the federal government, not state or local governments. See Barron v. Baltimore (1833). After the Fourteenth Amendment was ratified, the Supreme Court began the process of incorporation, or requiring the protections in the Bill of Rights to apply to state and local governments under the Fourteenth Amendment’s Due Process Clause. In Gitlow v. New York (1925), the Court incorporated the First Amendment’s freedom of speech protection.