How Big Tech gets away with censorship: how the courts have misinterpreted §230

If Twitter one morning decides to shut down your account with millions of followers because it doesn’t like your religious views, the color of your skin, or your country of origin, does §230 prevent you from suing it? If your business competitor creates an Instagram page impersonating your business that lies about your products, and Instagram’s algorithm drives users to view the page, does §230 prevent you from suing it? If Facebook’s algorithm connects terrorists based on their shared interests in terrorism, and as a result, they join together to kill Americans, does §230 prevent the families from suing it? If in all these hypotheticals you contacted the organizations and they told you they would take action to rectify the problem, but months went by and they didn’t do anything, could you sue them then? The reasonable observer might believe §230 cannot possibly prohibit these lawsuits, but based on how courts have interpreted it, each lawsuit would be prohibited.
Although §230 was codified in February 1996, the Supreme Court has never granted certiorari in a case requiring it to interpret the statute. As a result, the §230 precedents all come from different appellate courts. While there are some advantages to percolation (lower courts reaching different conclusions on the same legal question before the Supreme Court weighs in), 26 years of court decisions not based on the statute’s plain meaning (nontextualist interpretations) is indefensible. Today, Justice Clarence Thomas is the only justice to go on record (twice) about how the statute should be interpreted.
Summary
- Appellate courts across the country have interpreted §230(c)(1) to create a three-part test: (1) Is the defendant (online platform) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under its claims, as a publisher or speaker (3) of information provided by another information content provider (third party)? If the answer to all three questions is yes, the case is dismissed and the platform escapes liability. Questions (2) and (3) have been interpreted in an expansive way that is inconsistent with the statute.
- In Zeran, the 4th Circuit interpreted the term “publisher” in §230(c)(1) to protect platforms from any liability for “its exercise of a publisher’s traditional editorial functions – such as deciding whether to publish, withdraw, postpone, or alter content…” It also read distributor liability out of the statute. In Roommate, the 9th Circuit held that “any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.” And in Barnes, the 9th Circuit instructed that “what matters is not the name of the cause of action…what matters is whether the cause of action inherently requires the court to treat the defendant as the ‘publisher or speaker’ of content provided by another.”
- Under Zeran, Roommate, and Barnes, any claim (negligence, contract, civil rights, providing material support to terrorists, anticompetitive conduct, etc.) is barred if the claim attacks the platform’s decision to keep up, take down, or restrict access to content. But under this interpretation of §230(c)(1), courts have written §230(c)(2)(A), which details why a platform can restrict access to content, out of the statute. An interpretation of a statute that renders a different provision superfluous is at the very least problematic, if not wrong.
- Some courts have wrongly extended Zeran to bar claims attacking a platform’s own conduct, such as how it runs its website, rather than its decisions over third-party content. Under that reading, typical product liability or negligent design claims are prohibited. That result cannot be squared with §230(c)(1), which requires dismissal only if the claim involves “information provided by another information content provider.” Product liability claims attack a platform’s own conduct, not third-party content.
- Zeran also protects a platform’s decision to “alter” third-party content, but that result is also problematic under the statute. An interactive computer service (online platform, see §230(f)(2)) can become an information content provider (see §230(f)(3)) if it “is responsible in whole or in part, for the creation or development of information provided through the Internet…” In other words, a platform can be liable for its alteration of third-party content in certain situations. For example, a platform cannot take a third party’s non-defamatory statement and alter it into a defamatory one.
- While the Supreme Court has rejected several §230 cases, Justice Thomas has outlined his interpretation of it twice. “[B]oth provisions in §230(c) most naturally read to protect companies when they unknowingly decline to exercise editorial functions to edit or remove third-party content, §230(c)(1), and when they decide to exercise those editorial functions in good faith, §230(c)(2)(A).” He also took issue with Zeran’s “exercise of a publisher’s traditional editorial functions” analysis, rejected reading broad immunity into §230(c)(1), and criticized the argument that platforms should be immune from product liability claims over the platform’s own conduct as opposed to third-party content.
- Because of the broad precedents under §230(c)(1), many courts have not grappled with §230(c)(2)(A). Courts that have interpreted it have uniformly rejected the argument that “otherwise objectionable” means whatever the platform wants. In Malwarebytes, however, the 9th Circuit rejected reading “otherwise objectionable” under the statutory canon of ejusdem generis that would limit its interpretation based on the specific adjectives that precede it (see below).
Before getting into the cases, a quick word about the Federal Rules of Civil Procedure is important. Almost all the cases below were in the posture of a motion to dismiss under Rule 12(b)(6). In that posture, the court takes as true the facts the plaintiff alleges and construes them in the light most favorable to the plaintiff. If the court finds that §230 prohibits the lawsuit, the case is dismissed. If not, the plaintiff likely survives the motion to dismiss (notwithstanding other affirmative defenses), and the case moves into discovery. In other words, these cases are not on the merits of any of the plaintiff’s allegations. They should not be construed to suggest the plaintiff had viable claims on the merits or that the defendants engaged in the conduct the plaintiffs alleged. Surviving a motion to dismiss simply allows plaintiffs to move into discovery, find evidence to support their claims, and potentially proceed to a full trial on the merits.
Plain reading of §230
Before touching on how courts have largely misinterpreted the text of 47 U.S.C. § 230, it is important to recap from the last post (see bottom) what the statute actually says.
First, under §230(c)(1), online platforms cannot be held liable (they are immune) under any claim that treats them as a publisher or speaker of third-party speech. In plain English, this means that if someone is defamed in a Facebook status, he or she can sue the author of the defamatory post (and those who shared it), but not Facebook. The statute prohibits treatment as a speaker or publisher only. It does not immunize platforms from their own negligence, distributor liability, anticompetitive conduct, etc. In addition to the text, the legislative history makes this clear. Finally, a platform cannot be treated like a publisher or speaker of content unless it “is responsible in whole or in part, for the creation or development of information provided through the Internet…” (see §230(f)(3)). This makes sense because such content represents the platform’s own actions or speech, not that of a third party.
Under §230(c)(2)(A), platforms have several reasons for which they may restrict material from their websites if they act voluntarily and in good faith. One of the reasons, “otherwise objectionable,” must be read in conjunction with the six preceding adjectives. This canon of statutory construction is called ejusdem generis. Under it, where general words follow specific words in a statutory enumeration, the general words must be construed to embrace only objects similar in nature to those enumerated by the preceding specific words. Finally, the statute provides no cause of action (ability to sue) against platforms that remove content for a reason not provided for under §230(c)(2)(A).
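For readers who find decision procedures easier to see in code, the plain reading above can be sketched roughly as two independent checks. This is purely illustrative: the function and parameter names are my own shorthand, not statutory terms, and the judgment calls the statute requires (good faith, similarity in kind under ejusdem generis) are left as placeholders a court would have to fill in.

```python
# Illustrative sketch only: the plain-text reading of §230(c) described above,
# expressed as a decision procedure. The names and simplifications are mine,
# not the statute's, and the hard judgment calls are left to a court.

ENUMERATED_CATEGORIES = {
    "obscene", "lewd", "lascivious", "filthy",
    "excessively violent", "harassing",
}


def barred_by_c1(is_provider_or_user: bool,
                 treated_as_publisher_or_speaker: bool,
                 content_from_third_party: bool) -> bool:
    """§230(c)(1): the claim is barred only if all three elements are met."""
    return (is_provider_or_user
            and treated_as_publisher_or_speaker
            and content_from_third_party)


def similar_in_kind(reason: str) -> bool:
    """Placeholder for the ejusdem generis comparison: is the platform's stated
    reason similar in nature to the six enumerated adjectives? A court, not
    code, makes this call."""
    return False  # left undecided in this sketch


def protected_by_c2a(acted_voluntarily: bool,
                     acted_in_good_faith: bool,
                     stated_reason: str) -> bool:
    """§230(c)(2)(A): voluntary, good-faith restriction of material for an
    enumerated reason, with "otherwise objectionable" read narrowly."""
    reason_qualifies = (stated_reason in ENUMERATED_CATEGORIES
                        or similar_in_kind(stated_reason))
    return acted_voluntarily and acted_in_good_faith and reason_qualifies
```

The point of the sketch is simply that the two provisions do different work: (c)(1) turns on who created the content, while (c)(2)(A) turns on why and how the platform restricted it.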
Cases that have misinterpreted §230:
Zeran v. AOL (4th Cir. 1997).
- Holding: §230(c)(1) immunizes online platforms from distributor liability and any liability for “its exercise of a publisher’s traditional editorial functions–such as deciding whether to publish, withdraw, postpone, or alter content…” As a result, Zeran’s defamation claim seeking to hold AOL liable as a distributor was prohibited.
In the early days of America Online’s (AOL) bulletin boards, an anonymous user falsely advertised t-shirts with disgusting slogans related to the Oklahoma City bombing. In the ad, the user posted Zeran’s home phone number, which he used to run his business. As a result, Zeran received many calls and death threats, but he could not change his phone number because it was associated with his business. When he contacted AOL to get the post removed, it said it would remove the post, but it would not issue a retraction. The next day, another anonymous user posted a fake ad related to the bombing and told customers to call Zeran. This went on for several days. Zeran contacted AOL several more times, and the posts were not removed. At this point, Zeran was receiving a threatening phone call every two minutes. Finally, a radio station in Oklahoma read the advertisement on air, which intensified the calls even more. Zeran then sued AOL, seeking to hold it liable for defamation as a distributor—i.e., because AOL knew of the defamatory content of the posts and did not take them down, it should be held liable for Zeran’s reputational injuries.
The court held §230(c)(1) barred Zeran’s lawsuit because distributor liability is “merely a subset” of publisher liability. It emphasized that publication is a necessary element for a defamation claim, and AOL acted like a publisher in hosting the alleged defamatory post. Therefore, because AOL acted as a publisher, Zeran could not transform it into a distributor by bringing a distributor liability claim. The court determined that no amount of notice of the alleged defamation could turn AOL into a distributor because the decision to alter, remove, or keep a post after notification of its problematic character is a traditional publishing function. Bizarrely, while the court agreed that publisher liability in Stratton Oakmont and distributor liability in Cubby are different, it argued that neither case rejected the argument that distributor liability is a subset of publisher liability.
The court also read the purpose of §230 broadly to strengthen its interpretation of the text. It argued distributor liability would defeat the purpose of §230 to maintain the robust nature of the internet; it would cause online platforms to strictly regulate third-party speech and make “on-the-spot editorial decisions” over whether to keep or remove a post. Therefore, platforms would have a natural inclination to take down speech or not allow its posting to avoid liability altogether rather than face the consequences of not removing speech after it has been notified of its problematic character.
Zeran was the first appellate decision to interpret §230, and its holding has been widely cited. While the holding is very broad, some courts have taken the “its” in “its exercise of a publisher’s traditional editorial functions…” (see holding above) to reach a platform’s own editorial decisions or judgments. Read correctly, however, the preceding sentence limits “its” to the platform’s judgment over third-party content, not the platform’s judgment over its own content or conduct. In plain English, this means that while Facebook is immune from liability arising from third-party content (i.e., Facebook cannot be held liable for a user’s defamatory post), it is not immune from its own decisions—such as discriminatory actions with its algorithm or the way its website is set up.
Zeran’s interpretation of §230(c)(1) to reach a platform’s “exercise of a publisher’s traditional editorial functions” is textually problematic. For starters, if §230(c)(1) protects a publisher’s decision to take down or postpone content, what purpose does §230(c)(2)(A) serve? To receive immunity under §230(c)(2)(A), platforms must act both voluntarily and in good faith. According to Zeran, apparently neither is required. Second, a publisher’s decision to “alter” content and receive immunity likewise cannot be squared with the text. An interactive computer service (online platform—see §230(f)(2)) can become an information content provider (see §230(f)(3)) by being responsible “in whole or in part, for the creation or development of information provided through the Internet…” In other words, by altering content, the online platform can become liable for the finished product.
Zeran’s elimination of distributor liability is likewise textually unsound. §230(c)(1) goes out of its way to prohibit treatment of online platforms as a speaker or publisher. Arguing that distributor liability is a subset of publisher liability raises the question as to what the legal difference is between being treated as a speaker or publisher–or a distributor. In a defamation claim, for example, the publisher is liable in the same way as the speaker is. In other words, the law treats publishers just like it does speakers. But §230 goes out of its way to prohibit treatment as both. So in the defamation hypothetical, speaker liability is also a subset of publisher liability, but then why does the statute protect against both? The problem with Zeran’s broad interpretation of publisher is that it would also encompass speaker liability, even though Congress went out of its way to prohibit treatment as a speaker. It follows from this that publisher liability should not be broadly interpreted to swallow distributor liability. Moreover, if Congress wanted to eliminate treatment as a distributor, it would have so specified. In addition, the court’s argument that neither Cubby nor Stratton Oakmont rejected the argument that distributor liability is a subset of publisher liability does not strengthen its case. Neither court rejected the argument because the argument wasn’t made in either case. In Cubby, after rejecting publisher liability, the court performed a distributor liability analysis. And remember, no one in Congress had an issue with Cubby; their ire was directed solely at Stratton Oakmont.
Most importantly, §230(c)(1) prohibits platforms from being treated as speakers or publishers while §230(c)(2)(A) is an affirmative grant of immunity for restricting access to material if they act voluntarily and in good faith. The Zeran court effectively re-wrote §230(c)(1) to say that “[n]o provider or user of an interactive computer service shall be [held liable on account of any exercise of a publisher’s traditional editorial functions].” But under that reading, the words “treatment” and “immunity” must mean the same thing, and §230(c)(2)(A) serves no purpose. An honest textualist analysis likely would not reach that result.
Cases dismissed based on Zeran:
- Universal Communication Systems, Inc. v. Lycos (1st Cir. 2007).
- Holding: §230(c)(1) immunized Lycos from a fraudulent securities transactions claim based on the way it constructed and operated its website because that decision was an editorial (publishing) decision just like a decision not to delete a post. This case misinterpreted Zeran to hold that §230(c)(1) provides immunity not only for editorial decisions over third-party content but also for a platform’s own editorial decisions. “If the cause of action is one that would treat the service provider as the publisher of a particular posting, immunity applies… for its inherent decisions about how to treat postings generally.” Here, the claim argued the way Lycos constructed its website “contributed to the proliferation of misinformation”; but this case concerned Lycos’ decision over its own product (website), which had nothing to do with third-party content.
- Jane Doe v. MySpace (5th Cir. 2008).
- Holding: §230(c)(1) immunized MySpace from claims of negligence and gross negligence based on its failure to implement basic safety measures to protect minors because the plaintiff’s “allegations [were] merely another way of claiming that MySpace was liable for publishing the communications and they speak to MySpace’s role as a publisher of online third party-generated content.” Here, a 13-year-old girl lied about her age to say she was 18 and created a public MySpace profile. Because of her lie, she was allowed to circumvent MySpace’s safety features, her profile was made public, and a 19-year-old male was allowed to initiate contact with her. They ultimately met in person, where the man sexually assaulted her. The girl’s mother sued MySpace for its failure to “implement basic safety measures to prevent sexual predators from communicating with minors on its Web site.” The court relied on Zeran and other cases to conclude the form of plaintiff’s lawsuit did not matter because the substance attacked MySpace’s decision to publish the harmful content. But the lawsuit attacked MySpace’s decision not to have age-verification tools. That decision represented MySpace’s choice (its conduct) over how to run its website. It did not attack MySpace based on information provided by a third party.
- Levitt v. Yelp! (N.D. Cal. 2011). Affirmed by the 9th Circuit.
- Holding: §230(c)(1) immunized Yelp! from claims alleging unfair or fraudulent business practices based on its manipulation of “review pages by removing certain reviews and publishing others or changing their order of appearance” because it was an exercise of a publisher’s traditional editorial functions. But the claim went after Yelp! for its own rearrangement of reviews in violation of what it represented to users and customers; it did not seek to hold Yelp! liable as a speaker or publisher or based on third-party content. Here, the court misinterpreted Zeran’s “exercise of a publisher’s traditional editorial functions” language to reach a platform’s own editorial decisions (its conduct), not editorial decisions based on third-party content.
- Jane Doe No. 1 v. Backpage.com (1st Cir. 2016).
- Holding: §230(c)(1) immunized Backpage from a claim under the Trafficking Victims Protection Reauthorization Act (TVPRA) based on the website’s design and operation because they were “editorial choices that [fell] within the purview of traditional publisher functions.” Here, Backpage allegedly allowed advertisements for three women who were minors at the time in its “Escorts” section, which led to their sex trafficking and victimization. Relying on Zeran and Lycos, the court determined that Backpage’s decision not to verify phone numbers, its rule about whether a person can post after attempting to enter a forbidden term, its process for uploading photos, and its provision of email anonymization, forwarding, auto-reply, and storage services for posters were publishing choices. But none of those choices has anything to do with third-party content; they all represent Backpage’s own choices about how to run its website. Moreover, the TVPRA claim did not treat Backpage as a publisher; rather, the claim attacked Backpage’s conduct in how it ran its website, alleging it knowingly benefitted from participation in a venture that it knew or should have known was engaged in sex trafficking.
- Herrick v. Grindr (2nd Cir. 2019).
- Holding: §230(c)(1) immunized Grindr from product liability and negligent design claims based on its failure to implement safety features to prohibit impersonating profiles and other dangerous conduct because the claims treated it as a publisher. The court reasoned the lack of safety features was relevant only because such features would make it more difficult for Herrick’s ex-boyfriend to post impersonating profiles or make it easier for Grindr to remove them, which were publishing decisions. Here, Herrick’s ex-boyfriend used Grindr to create several fake profiles of him, communicated with other users, and directed them to Herrick’s home and workplace. The court relied on Backpage and Lycos to find Grindr’s choice not to implement safety features to prohibit harassment and impersonation to be publishing decisions. As in the above cases, however, that choice belonged to Grindr and was not based on third-party content. Herrick’s claims attacked several flaws with Grindr’s product; they did not go after Grindr for publishing Herrick’s information. The fact that Herrick’s evidence came from his own situation should not have turned his allegations into a claim holding Grindr liable as a publisher of information provided by a third party.
Fair Housing Council v. Roommate.com (9th Cir. 2008) (en banc).
- Holdings: (1) “[A]ny activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.”
- (2) An interactive computer service (online platform) loses its § 230 immunity when it elicits illegal content and “makes aggressive use of it in conducting its business.”
- (3) An interactive computer service also loses its § 230 immunity if instead of augmenting the content generally (fixing typos, removing obscenity), it “contributes materially to the alleged illegality of the conduct.”
Roommate.com (Roommate) ran a website that matched people renting out spare rooms with people looking for a place to live. Before users could use the site, they had to create a profile that required them to provide their name, location, email address, sex, sexual orientation, and whether they were bringing a child with them. Users could also provide additional comments describing themselves and what they wanted in a roommate. After the application was complete, Roommate created a profile page that displayed a user’s description and preferences. Users then had two options—use the free service to search other profiles and receive periodic emails showing new housing opportunities, or pay a monthly fee for the additional service of reading emails from other users and viewing other users’ additional comments.
The Fair Housing Councils of the San Fernando Valley and San Diego sued Roommate alleging it was in violation of the federal Fair Housing Act (FHA) (42 U.S.C. § 3604(c)) and California housing discrimination laws. The FHA prohibits several forms of discrimination based on race, color, religion, sex, familial status, or national origin. California law prohibits several forms of discrimination based on sexual orientation, marital status, ancestry, source of income, or disability. Plaintiffs argued several Roommate policies were against the law: (1) the questions it posed to prospective users during the application process; (2) its requirement that users answer those questions to use its website; (3) its development and display of user preferences on a user’s profile page; (4) the operation of its search system or its email notification system; and (5) users’ additional comments.
The court held that §230(c)(1) did not prohibit holding Roommate liable on the first four claims but did prohibit liability for a user’s additional comments. On the first claim, Roommate did not receive immunity because it came up with the questions. To receive §230(c)(1) immunity, the content must be provided by a third party. Moreover, one can be liable under both statutes simply for asking questions like Roommate’s. On the second claim, Roommate did not receive immunity because §230 “does not grant immunity for inducing third parties to express illegal preferences.” Roommate posed the questions and required users to answer them to use the site. That was Roommate’s own action, not information provided by a third party.
The third claim concerned every user’s profile page, which contained the user’s personal information (sex, sexual orientation, etc.) and what he or she was looking for in a roommate. This information came directly from Roommate’s mandatory registration process. Users listing available housing and users seeking a place to live had to provide their preferences with respect to sexual orientation, sex, and whether children were present or whether the user would live with children, and could choose only from Roommate’s pre-populated answers. Roommate used this information to steer users away from listings inconsistent with their stated preferences, and the information is featured on every profile page. Roommate did not receive immunity because it served as a developer, in part, of the questionnaire that every user had to fill out. §230(c)(1) provides immunity when interactive computer services (online platforms) are treated as publishers or speakers of information provided by another information content provider. But an information content provider is defined as “any…entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet…” (see §230(f)(3)). Because Roommate created the programs that required users to fill out the questionnaire with pre-populated answers based on preferences as a condition of using its website and required each user to have a profile based on how they answered those questions, it became an information content provider. And §230(c)(1) does not protect information content providers; it protects only interactive computer services. The fact that users also served as information content providers did not prevent Roommate from being one as well.
The fourth policy—Roommate’s operation of its search system or its email notification system, which directed emails to users according to the criteria filled out in the questionnaire—was also not entitled to immunity. This follows from the paragraph above: if Roommate did not get immunity for requiring users to answer discriminatory questions, it could not get immunity for using a user’s answers to limit search results or email notifications based on those discriminatory preferences. Unlike Google’s generic search engine, Roommate’s search engine (and email system) provided results to users based on their discriminatory preferences. Therefore, instead of serving as a passive conduit, Roommate’s role was one of a developer in that its systems materially contributed to the alleged unlawfulness of using unlawful criteria to limit the scope of a search.
The only policy for which Roommate received immunity was its choice to allow users to write additional comments about themselves that were accessible by paying subscribers only. User comments are pure third-party content; Roommate published the comments as written, provided no guidance as to what the comments should say, and passively displayed them for paying subscribers. Unlike with the above policies, Roommate was not a developer because its comment box only encouraged users to provide something; it did not instruct them what to provide or even require them to provide anything.
The most prominent holding in this case—“any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230”—has done serious damage across the country. The holding is as prominent as the similar holding in Zeran and has resulted in immunity for online platforms from any lawsuit seeking to hold them liable for any claim where the platform removed speech. Whether the claim is one of contract, civil rights, tort, etc., the lawsuit is barred if it is really about the platform’s decision to remove speech. As in Zeran, this interpretation of the statute renders §230(c)(2)(A) entirely superfluous. It also reads out the statutory language that calls for platforms to act voluntarily and in good faith when they restrict access to material. It expands “publisher” under §230(c)(1) beyond its text and beyond why Congress afforded online platforms these protections. The case uses other broad language that is also problematic. It notes that “this is an immunity statute we are expounding,” “[s]uch close cases…must be resolved in favor of immunity,” and “section 230 must be interpreted to protect websites not merely from ultimate liability, but from having to fight costly and protracted legal battles.”
Roommate also teaches that if platforms provide users neutral tools to post content instead of discriminatory ones like Roommate’s, the platform will be afforded immunity because it will not be materially contributing to the alleged illegality of the user’s content. But the statute does not provide a material contribution test. Rather, the question is whether the platform “is responsible in whole or in part for the creation or development of information provided through the Internet…” Roommate was an extreme case because users had to fill out their discriminatory preferences to use the website. The entire website was premised on that illegality. But the line between contributing and materially contributing to alleged illegality is far from clear; and based on how broadly courts have interpreted §230, it is far more likely platforms will be immune in most cases than liable. For example, if a platform directs users to a defamatory post it did not create, the platform contributes to the reputational injury suffered by the victim. But does the platform materially contribute to the alleged illegality in that hypothetical? We don’t know. Does worsening the victim’s injury make the platform liable, or must the platform itself engage in illegal conduct like saying something defamatory? We don’t know.
Cases dismissed based on Roommate’s first holding:
- Riggs v. MySpace (C.D. Cal. 2009). Affirmed by the 9th Circuit.
- Holding: §230(c)(1) immunized MySpace from negligence and gross negligence claims because the claims sought to hold MySpace liable as a publisher for removing a user’s page. The form of the lawsuit (negligence) did not matter because the substance went after MySpace’s decision as a publisher to remove content.
- Sikhs for Justice v. Facebook (N.D. Cal. 2015). Affirmed by the 9th Circuit.
- Holding: §230(c)(1) immunized Facebook from claims of illegal discrimination based on immutable characteristics under Title II of the Civil Rights Act of 1964 for taking down a user’s page because the claim was based on Facebook’s decision as a publisher to take down content. The form of the lawsuit (illegal discrimination) did not matter because the substance went after Facebook’s decision as a publisher to remove content.
- Fed. Agency of News LLC v. Facebook (N.D. Cal. 2020).
- Holding: §230(c)(1) immunized Facebook from claims of breach of contract and breach of the implied covenant of good faith and fair dealing because the claims sought to hold Facebook liable as a publisher in removing a user’s page. The form of the lawsuit (breach of contract) did not matter because the substance went after Facebook’s decision as a publisher to remove content.
Case dismissed based on Roommate’s third holding:
- Jones v. Dirty World Entertainment Recordings (6th Cir. 2014).
- Holding: §230(c)(1) immunized Dirty World Entertainment Recordings from claims of defamation, libel per se, false light, and intentional infliction of emotional distress because its manager, who commented on the allegedly tortious posts with his own observations, did not materially contribute to the alleged illegality. Dirty World’s website provided a forum for users to post comments about anything, and its manager would comment on each post, making it clear it was him by signing in boldface text. Unlike in Roommate, the site did not require users to post illegal content as a condition of use, and the instructions given to users about what to post were neutral. Even though the manager knew of the problematic content of the users’ posts and joined in by making absurd comments (although not tortious ones) that drove users to the posts, that was not enough to materially contribute to the alleged illegality. This case illustrates the problem with the material contribution test. Roommate requires the platform to contribute materially to the alleged illegality. The manager here did not: although his comments drove users to the posts and thereby contributed to the victim’s reputational injury, his comments were not tortious, so the platform could not be held liable. In other words, if the platform does not engage in tortious actions but uses its algorithm or employees to highlight tortious posts, it cannot be held liable. That result is incredibly problematic.
Force v. Facebook (2nd Cir. 2019). (Based in part on Roommate’s third holding).
- Holding: §230(c)(1) immunized Facebook from civil terrorism-related claims that alleged it provided material support for terrorists through its algorithm, which facilitated Hamas’ ability to reach and engage an audience it otherwise could not because “arranging and distributing third-party information inherently forms ‘connections’ and ‘matches’ among speakers, content, and viewers of content…in interactive internet forums…[t]hat is the essential result of publishing.”
Between 2014 and 2016, four Americans in Israel were killed, and one seriously injured, in attacks by Hamas. The families of the victims and the survivor claimed that Hamas posted content on Facebook that encouraged terrorist attacks in Israel during the time of the attacks. They also alleged that the terrorists viewed the content on Facebook, and it ranged in specificity from Hamas messages that advocated for the kidnapping of Israeli soldiers to a Hamas post encouraging car-ramming attacks at light rail stations. Hamas also celebrated these attacks on Facebook. While Facebook’s terms do not allow these posts, plaintiffs alleged Facebook failed to remove them, and its algorithm directed this content to personalized newsfeeds of individuals who harmed the plaintiffs. The victims’ estates and the surviving American sued Facebook alleging it was civilly liable for several terrorism-related claims—most importantly, providing material support to terrorists.
Plaintiffs argued that Facebook’s algorithm took it outside of §230(c)(1) protections because it was not acting as a publisher, as the algorithm suggested content to users that resulted in “matchmaking.” The argument followed that those associated with Hamas or terrorism generally were able to view this content and connect with other terrorists because of the algorithm. Therefore, Facebook allegedly provided material support to terrorists by making it easier for Hamas to reach and engage an audience that it could not otherwise reach as effectively or at all. Plaintiffs did not seek to hold Facebook liable as a publisher, but instead liable for its affirmative role in bringing terrorists together.
The court held that each claim was barred by §230(c)(1). It recited the holding from Zeran and several other §230 cases that hold §230(c)(1) favors immunity because of the broad definition courts have given the term “publisher.” It argued that Facebook acted as a publisher through its algorithm because “arranging and distributing third-party information inherently forms ‘connections’ and ‘matches’ among speakers, content, and viewers of content…in interactive internet forums…[t]hat is the essential result of publishing.” The court believed that Facebook’s immunity would be “eviscerated” if the use of its algorithm rendered it a non-publisher. While it recognized that “matchmaking” causes more matches than typical editorial decisions like what third-party content to put on a homepage, it did not find that as a basis to deny immunity. It also broadly read the congressional findings in §230 to strengthen its argument.
Plaintiffs also argued that Facebook should not receive §230(c)(1) protection because by use of its algorithm, it helped develop Hamas’ content by directing users who were interested in terrorist activities without the users seeking out the content. Here, plaintiffs argued that Facebook became an information content provider (see §230(f)(3)) over Hamas’ content because it helped develop it in part. The court rejected this argument under Roommate’s material contribution test because Facebook did not edit or suggest edits for a user’s content, and its algorithm was content-neutral in that it matched users based on objective factors applicable to anything. The fact that Facebook’s algorithm displayed content to other users even if they did not actively seek it out was not enough for Facebook to materially contribute to Hamas’ content.
The dissent disagreed with characterizing Facebook’s use of its algorithm as publishing the work of a third party. It recognized the “publisher’s traditional editorial functions” test from Zeran but argued that Facebook’s algorithm was different because it creates and communicates its own message, and its suggestions created “real-world social networks.” In other words, Facebook’s friend, group, and event suggestions are all Facebook’s own product, not third-party speech. While the data comes from third parties, Facebook makes the recommendations. Facebook allegedly chose to allow its algorithm to connect users based on shared interests—and one of those interests was terrorism. Therefore, because the plaintiffs’ claims were not based on the content of information from third parties but on connections between individuals possible only because of Facebook’s algorithm, §230(c)(1) should not have prohibited the lawsuit. In addition, the lawsuit did not seek to hold Facebook liable as a publisher, but liable based on its affirmative role in bringing terrorists together.
Enigma Software Group USA v. Malwarebytes (9th Cir. 2019).
- Holding: §230(c)(2)(A) did not immunize Malwarebytes from its decision to label Enigma’s virus protection software as a threatening program that users could not download because the statute’s use of “otherwise objectionable” did not reach anticompetitive conduct. But the court also rejected reading “otherwise objectionable” under the statutory canon of ejusdem generis.
Enigma and Malwarebytes both sold computer security software across the world and were direct competitors. One function of computer security software is to help users identify and block threatening programs before they download them. Malwarebytes labeled threatening software as a Potentially Unwanted Program (PUP), a category that included software containing “obtrusive, misleading, or deceptive advertisements, branding or search practices.” Once Malwarebytes was installed on a computer, if a user tried to download a program that Malwarebytes determined might be a PUP, a pop-up alert instructed the user to stop the download. After eight years of competition, Malwarebytes’ software began flagging Enigma’s most popular software programs as PUPs, but unlike with other PUPs, Malwarebytes did not let users download Enigma’s software. As a result, Enigma lost customers and revenue and suffered harm to its reputation. Enigma sued under four claims, three of them state-law claims relevant here: (1) deceptive business practices; (2) tortious interference with business relations; and (3) tortious interference with contractual relations.
The court held that §230(c)(2)(A) did not prohibit the lawsuit because Malwarebytes’ anticompetitive animus did not fit under the statute’s use of “otherwise objectionable.” Malwarebytes took an extreme position that “otherwise objectionable” is a catchall, i.e., whatever reason the online platform thinks is objectionable qualifies as a reason to restrict access to third-party content. Relying on §230’s congressional purpose, the court did not think Congress meant “otherwise objectionable” to be defined to allow software developers to drive each other out of business. Rather, Congress sought to “maximize user control over what content they view” (§230(b)(3)), and if “otherwise objectionable” reached anticompetitive animus, online platforms could block content users might want to see because it might hurt the platform’s bottom line.
Unfortunately, the court also rejected Enigma’s reading of “otherwise objectionable” to reach only material that is sexual or violent in nature under the statutory canon of ejusdem generis. Under that canon, “when a generic term follows specific terms, the generic term should be construed to reference subjects akin to those with the specific enumeration.” It rejected this position because the categories under §230(c)(2)(A) vary greatly, “[m]aterial that is lewd or lascivious is not necessarily similar to material that is violent, or material that is harassing.” In other words, because the categories of words before “otherwise objectionable” covered wide topics, ejusdem generis could not be used. The court suggested “otherwise objectionable” was meant to cover unwanted online content that Congress could not identify when it drafted the statute.
While the court was correct to reject the boundless reading of “otherwise objectionable,” it was wrong to reject the reading under ejusdem generis. The court’s problem with the canon can be solved by comparing the reason the platform restricted access to the material with the adjectives listed before “otherwise objectionable.” While there will be close cases, many of them will not be. Its argument that “otherwise objectionable” should cover what Congress could not identify when it drafted the statute is without question a less workable test than ejusdem generis. How exactly is a court supposed to determine what Congress could not identify in 1996? One thing it would force a court to do is look at the legislative history instead of relying on the text, which is what was actually enacted into law (i.e., it went through bicameralism and presentment).
Cases that have correctly interpreted §230:
Chicago Lawyers’ Committee For Civil Rights Under Law, Inc. v. Craigslist, Inc. (7th Cir. 2008).
- Holding: §230(c)(1) immunized Craigslist from liability under the Fair Housing Act for illegal advertisements by third parties because the suit sought to hold it liable as a publisher, but Craigslist did not author the ads.
Craigslist’s website featured a series of advertisements for goods and services, including buying, selling, and renting homes. The Chicago Lawyers’ Committee for Civil Rights Under Law, on behalf of its members, sued Craigslist for violating the Fair Housing Act, which in pertinent part makes it illegal “[t]o… print or publish… any… advertisement, with respect to the sale or rental of a dwelling that indicates preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin…” See 42 U.S.C. § 3604(c).
The court held that §230(c)(1) prohibited the lawsuit because the only way Craigslist could be liable under the Fair Housing Act was for its role in publishing illegal advertisements. In other words, the claim here was the exact type of lawsuit that §230(c)(1) was meant to prevent. While recognizing that other circuits have read §230 broadly (Zeran), it took a much narrower view. First, it noted that § 230(c)(1) does not mention “immunity” and doubted that §230(c)(1)-(2) could be understood as a “general prohibition of civil liability for web-site operators…” It buttressed this argument by observing that an interactive computer service can lose its §230 immunity when it in whole or in part creates or develops information provided through the internet. See §230(f)(3). But it also noted the broadness of “any information” in §230(c)(1) in that an interactive computer service cannot be treated as a publisher or speaker over “any information” provided by another information content provider.
Most importantly, the court reiterated the concerns about reading §230 broadly that it had raised in a previous decision, Doe v. GTE Corp. In that case, the court sketched out an interpretation of the text in stark contrast with Zeran. It recognized that the title of subsection (c)—“Protection for ‘Good Samaritan’ blocking and screening of offensive material”—seems at odds with a reading that allows platforms to sit by and do nothing to address the offensive material on their sites. To read the title and the operative text in harmony, it suggested “an entity would remain a ‘provider or user’ — and thus be eligible for the immunity under §230(c)(2) — as long as the information came from someone else; but it would become a ‘publisher or speaker’ and lose the benefit of §230(c)(2) if it created the objectionable information….§230(c)(2) never requires ISPs to filter offensive content, and thus §230(e)(3) would not preempt state laws or common-law doctrines that induce or require ISPs to protect the interests of third parties…There is yet another possibility: perhaps §230(c)(1) forecloses any liability that depends on deeming the ISP a ‘publisher’ — defamation law would be a good example of such liability — while permitting the states to regulate ISPs in their capacity as intermediaries.” This narrower reading interprets the legally binding text as written, as opposed to an interpretation that relies on broad congressional purpose or findings to widen the scope of the text.
Barnes v. Yahoo! (9th Cir. 2009).
- Holdings: §230(c)(1) immunized Yahoo from liability for a claim of negligent undertaking after it promised to remove a series of fake profiles and did not do so, but not from a claim of breach of contract because its promise to remove content created a legal duty distinct from removing the content.
- “[W]hat matters is not the name of the cause of action…what matters is whether the cause of action inherently requires the court to treat the defendant as the ‘publisher or speaker’ of content provided by another.”
After Cecilia Barnes and her boyfriend ended their relationship, he responded by posting profiles of her on a website run by Yahoo! (Yahoo). On the site, users can post information such as their age, pictures, location, hobbies, etc., and it can be viewed by users all over the world. The fake profiles contained nude photos of Barnes and her boyfriend taken without her knowledge, an open invitation to have sex with her, and the real address and phone number of her place of employment. Men began emailing, calling, and visiting Barnes with the expectation of sex. Barnes went through Yahoo’s process for getting the profiles removed, but after a month, Yahoo had not responded. Barnes contacted Yahoo several more times without response. After a local news program prepared to do a story on the incident, Yahoo broke its silence, and its Director of Communications (Director) told Barnes she would take the matter to the division responsible for getting profiles removed and they would “take care of it.” But two more months went by, and the profiles were still up. Barnes sued Yahoo, and the profiles were removed.
Barnes brought two claims against Yahoo, both based on its Director’s promise to take Barnes’ case to the department responsible for getting the profiles removed. The first claim, negligent undertaking (found in the Restatement (Second) of Torts § 323), was based on Yahoo’s failure to exercise reasonable care after its agent (the Director) promised to act: the profiles were left up for months after the promise. The second claim was breach of contract based on Barnes’ reliance on the Director’s promise to take her situation to the division responsible for getting the profiles removed. After the Director’s promise, Barnes took no further action to ensure the profiles were taken down. In other words, Barnes relied on the Director’s promise.
The court held that the negligent undertaking claim was barred by §230(c)(1) while the breach of contract claim was not. First, it recited the broad §230(c)(1) caselaw that “what matters is not the name of the cause of action…[but] whether the cause of action inherently requires the court to treat the defendant as the ‘publisher or speaker’ of content provided by another.” It defined “publisher” broadly to include “reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content… [and] [a]ny activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.”
While Barnes’ first claim sought to hold Yahoo liable for negligently undertaking to perform a service, the claim was barred because what Yahoo failed to do was remove the profiles from its website. That decision went after Yahoo as a publisher for failing to remove content. Barnes’ second claim could proceed, however, because it did not hold Yahoo liable as a publisher but based liability on its promise to do something. Here, the court distinguished tort from contract law because although the tort claim went after Yahoo’s failure to remove content, the contract claim went after Yahoo’s failure to live up to its promise to remove content. Unlike tort law, contract law “treats the outwardly manifested intention to create an expectation on the part of another as a legally significant event.” Yahoo’s promise created a legal duty that was different from the action of removing content. The court suggested another way of looking at it was that while §230(c)(1) removes the ability for online platforms to be treated as speakers or publishers, Yahoo’s promise altered that rule and made the promise enforceable.
Based on the facts of the case, one might surmise that the court made a Solomonic decision, allowing one claim to proceed but not the other. Nevertheless, under the court’s interpretation of the law, it would seem that both claims should be prohibited. At its core, the contract claim was about a promise to take down content. So why would the form of the lawsuit suddenly matter when the court previously instructed that it shouldn’t? A defensible result would have been for the tort claim to proceed as well under the theory that negligent undertaking would not treat Yahoo as a speaker or publisher. Rather, the claim went after Yahoo’s failure to exercise reasonable care after it promised to undertake an action on behalf of Barnes. In other words, the tort claim went after Yahoo’s conduct, not the content of the fake profiles. Even if that argument would not have been successful, because the claim was about taking down content, perhaps Barnes should have argued Yahoo failed to act in good faith under §230(c)(2)(A).
Jane Doe No. 14 v. Internet Brands, Inc., DBA Modelmayhem.com (9th Cir. 2016).
- Holding: §230(c)(1) did not immunize Internet Brands from a claim of negligent failure to warn based on its knowledge that two men were using its website for a criminal scheme because instead of holding it liable as a publisher, the claim sought to hold it liable for its failure to warn based on information it obtained from an outside source. The affirmative duty imposed on Internet Brands “would not require [it] to remove any user content or otherwise affect how it publishes or monitors such content.”
Two men used the website Model Mayhem (Mayhem) to find targets for their rape scheme. While they did not create content on the site, they browsed profiles created by models, contacted them with fake identities posing as talent scouts, and lured them to south Florida for a fake modeling audition. When the victim arrived, they “used a date rape drug to put her in a semi-catatonic state, raped her, and recorded the activity on videotape for sale and distribution as pornography.” In 2008, Internet Brands purchased Mayhem and later learned the two men were using the website, “had been criminally charged in the scheme, and further knew from the criminal charges, the particular details of the scheme…” Several months after Internet Brands learned of the criminal activity, one of the men pretended to be a talent scout and used a fake identity to contact the plaintiff through Mayhem’s website. She went to Florida for a purported audition, and the men drugged, raped, and recorded her. She sued Internet Brands “asserting one count of negligent failure to warn under California law,” arguing Internet Brands knew of the criminal’s activities but failed to warn Mayhem users that they were at risk of becoming victims.
The court held that §230(c)(1) did not prohibit the plaintiff’s claim because instead of holding Internet Brands liable as a publisher, the claim sought to hold it liable based on its affirmative duty under California law to warn her based on information it obtained from an outside source. Here, the plaintiff argued that Internet Brands had “a duty to warn a potential victim of third-party harm when a person has a ‘special relationship to either the person whose conduct needs to be controlled or . . . to the foreseeable victim of that conduct.’” She alleged Internet Brands “had a cognizable ‘special relationship’ with her and that its failure to warn her of Flanders and Callum’s rape scheme caused her to fall victim to it.” (Flanders and Callum were the two men behind the scheme.) The court noted that the claim could not treat Internet Brands as a publisher because the alleged duty did not require it to change how it publishes or monitors content. Internet Brands could have complied with the duty by posting a notice on its website or informing users by email. Such action would not implicate §230 because Internet Brands itself would be providing the statement; it would not be based on information provided by a third party.
e-ventures Worldwide, LLC v. Google, Inc. (M.D. Fla. 2016).
- Holding: §230(c)(2)(A) did not immunize Google from its decision to remove several hundred of the plaintiff’s websites from its search page because the plaintiff alleged Google acted in bad faith based on its removal being inconsistent with Google’s removal policies, Google’s public statements about its removal policies, and removal of the pages for anti-competitive reasons.
E-ventures operated an online publishing and research firm, and most of its revenue came from the search engine optimization industry. The goal of that industry is to help websites get displayed more prominently in the results of a search engine without paying the search engine. Google makes most of its money from “AdWords,” an advertising program that requires website operators to pay to be ranked and displayed prominently in Google’s search results. In other words, if e-ventures is successful in its work, Google’s revenue goes down. E-ventures learned that a third party told Google false information about it; as a result, Google removed all 231 e-ventures websites from its search results. Google identified every website as “[p]ure spam,” which, as defined by Google, meant the website was using techniques not allowed by its webmaster guidelines. The 231 websites were all manually removed, including e-ventures’ corporate website, which did not engage in any activities that could possibly be identified as spam. Ultimately, Google removed 365 of e-ventures’ websites. E-ventures attempted to get new websites listed in a Google search, but because of their affiliation with e-ventures, Google rejected them. E-ventures sued Google on four claims, three of which are relevant here: (1) violation of the Florida Deceptive and Unfair Trade Practices Act; (2) defamation; and (3) tortious interference with business relationships.
The court held that §230(c)(2)(A) did not prohibit the lawsuit because e-ventures argued that Google did not act in good faith when it removed the websites from its search results. While many courts would have dismissed the lawsuit under §230(c)(1) because according to Zeran, the decision to take down content is an “exercise of a publisher’s traditional editorial functions,” the court here did not read the statute as broadly. Instead, the court recognized that the text of §230(c)(2)(A) deals with a decision to take down content, not §230(c)(1). And under §230(c)(2)(A), if the platform does not act in good faith, it would not be entitled to immunity. Here, e-ventures argued that Google did not act in good faith because “the removal of its websites was inconsistent with statements published by Google in its ‘Removal Policies,’ both in terms of what the Policy says and what it fails to say.” Moreover, e-ventures also claimed that Google’s public statements about its removal policies were false, inconsistent with how it treated e-ventures’ websites, and motivated by anti-competitive reasons.
- Other courts have likewise refused to read “otherwise objectionable” in §230(c)(2)(A) to mean whatever the internet platform wants it to mean.
- In Song Fi Inc. v. Google (N.D. Cal. 2015), YouTube removed and relocated the plaintiff’s music video, and the plaintiff sued on several claims. YouTube argued that “otherwise objectionable” is entirely subjective and allowed it to restrict access to material for any reason it wanted. The court firmly rejected that reading, holding instead that “the ordinary meaning of ‘otherwise objectionable,’ as well as the context, history, and purpose of the Communications Decency Act all counsel against reading ‘otherwise objectionable’ to mean anything to which a content provider objects regardless of why it is objectionable.” See also Sherman v. Yahoo! Inc. (S.D. Cal. 2014), holding the same.
Malwarebytes, Inc. v. Enigma Software Group USA, LLC (2020)—Statement of Justice Thomas respecting the denial of certiorari.
Because the Supreme Court has never taken a §230 case, Justice Thomas is the only justice who has gone on record with how he would interpret the statute. In this statement, he laid out the problems with the current interpretations of §230 (drawn largely from the cases above) and possible ways to correct course.
First, Justice Thomas laid out a plain reading of the statute: “the statute suggests that if a company unknowingly leaves up illegal third-party content, it is protected from publisher liability by §230(c)(1); and if it takes down certain third-party content in good faith, it is protected by §230(c)(2)(A).” He next took issue with the 4th Circuit’s holding in Zeran that §230(c)(1) protects platforms from distributor liability even though the text protects platforms only from being treated as a publisher or speaker. He noted that in a different section of the Communications Decency Act, Congress imposed distributor liability enforceable by civil remedy, and that Congress passed §230 to overturn Stratton Oakmont’s publisher theory of liability, which held that platforms that constantly moderate third-party content should be treated like publishers rather than distributors. Congress did not pass §230 to overturn Cubby’s theory of distributor liability. Perhaps his strongest textual argument was that if Congress had wanted to eliminate all liability for third-party speech, §230(c)(1) could have simply read, “No provider ‘shall be held liable’ for information provided by a third party.” In fact, §230(c)(2)(A) uses that exact language.
Next, Justice Thomas took issue with several appellate courts’ interpretation of §230(c)(1) as providing immunity for the “exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content.” That interpretation is at odds with the text, which shields platforms from liability only for content provided by another information content provider. Online platforms can in fact lose protection if they, in whole or in part, create or develop content, such as by editing or adding commentary to third-party statements. This broad interpretation of §230(c)(1) also reads §230(c)(2)(A), which encourages platforms to create content guidelines, out of the statute. Read together, “both provisions in §230(c) most naturally read to protect companies when they unknowingly decline to exercise editorial functions to edit or remove third-party content, §230(c)(1), and when they decide to exercise those editorial functions in good faith, §230(c)(2)(A).”
Finally, Justice Thomas raised concerns with interpretations of §230(c)(1) that protect platforms from product-defect claims, such as in Force v. Facebook (see above), where the 2nd Circuit held Facebook was immune from a lawsuit seeking to hold it liable for connecting terrorists through its algorithm. After citing several other examples, he stated that “[a] common thread through all these cases is that the plaintiffs were not necessarily trying to hold the defendants liable ‘as the publisher or speaker’ of third party content. §230(c)(1). Nor did their claims seek to hold defendants liable for removing content in good faith. §230(c)(2). Their claims rested instead on alleged product design flaws—that is, the defendant’s own misconduct.” In other words, Justice Thomas suggested that §230 does not protect platforms from liability for their own actions, only for the content of another when the claim seeks to treat them as a publisher or speaker. While that reading might not sound like a large change, if courts adopted it, many lawsuits would proceed past the motion-to-dismiss stage and on to the merits.
Carly Lemmon v. Snap, Inc. (9th Cir. 2021). (Issued after Justice Thomas’ statement above)
- Holding: §230(c)(1) did not immunize Snapchat from liability for the negligent design of its own product because the lawsuit did not seek to hold it liable as a publisher but rather under a products liability theory, which entails a “duty to exercise due care in supplying products that do not present an unreasonable risk of injury or harm to the public.”
Snapchat allowed users, among other things, to take photos (snaps) and immediately share them with friends. One feature, the Speed Filter, captured how fast the user was traveling when the photo was taken, whether in a car, a plane, or a train. Snapchat also rewarded users with “trophies, streaks, and social recognitions” based on what they sent, and users believed that Snapchat would reward them for snaps documenting speeds above 100 mph. In this case, three men (including the driver) were killed when the driver, going 113 mph, crashed into a tree and the car burst into flames. Earlier in the drive, the car had reached speeds as high as 123 mph. Shortly before the crash, one of the passengers opened the Snapchat app to document how fast they were going. The parents of the deceased sued Snapchat for negligent design, alleging that it knew or should have known that its users believed a reward system existed and that its Speed Filter incentivized young drivers to drive at dangerous speeds.
The court held that the negligent design claim was not barred by §230(c)(1) because the lawsuit did not treat Snapchat as a publisher; it sought to hold Snapchat liable under a products liability theory for the negligent design of its own product. The court defined publication as activity that “generally involves reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content,” using a defamation claim as an example. In stark contrast, the lawsuit here sought to hold Snapchat liable because it “created: (1) Snapchat; (2) Snapchat’s Speed Filter; and (3) an incentive system within Snapchat that encouraged its users to pursue certain unknown achievements and rewards.” Unlike a claim that treats a platform as a publisher, a negligent design claim is a products liability suit alleging that manufacturers “have a ‘duty to exercise due care in supplying products that do not present an unreasonable risk of injury or harm to the public.’” That duty is markedly different from the duties of a publisher, which entail reviewing and editing material submitted for publication. By treating Snapchat as a product manufacturer that negligently designed its product, the lawsuit did not treat it as a publisher.
Moreover, §230(c)(1) immunity was unavailable for a second reason: the negligent design claim did not involve “information provided by another information content provider.” Rather, the basis of the lawsuit was Snapchat’s own product, because its Speed Filter and reward system worked together to encourage dangerous speeds. The lawsuit was not about Snapchat’s decision to publish the photos documenting how fast the men were going before the crash.
Jane Doe v. Facebook (2022)—Statement of Justice Thomas respecting the denial of certiorari.
Justice Thomas for a second time highlighted the need for the Supreme Court to “address the proper scope of immunity under §230 in an appropriate case.” Here, Jane Doe, a 15-year-old girl, alleged that a male sexual predator used Facebook to lure her into meeting him. He then raped, beat, and sex-trafficked her. After Doe escaped, she sued Facebook, alleging various common-law offenses, among other claims. The Texas Supreme Court declined to grant Doe mandamus relief and held that §230(c)(1) barred her suit because other courts have “treated internet platforms as ‘publisher[s]’ under [the statute], and thus immune, whenever a plaintiff’s claim ‘stem[s] from [the platform’s] publication of information created by third parties.’”
Justice Thomas recognized that the problem with this interpretation is that it “requires dismissal of claims against internet companies for failing to warn consumers of product defects or failing to take reasonable steps ‘to protect their users from the malicious or objectionable activity of others.’” Facebook was given publisher immunity even though the plaintiff alleged that it knew its system facilitated human traffickers in identifying victims but failed to correct course because doing so would cost it the advertising revenue those users generate. Importantly, he also took issue with the argument that §230(c)(1) should protect Facebook from liability for its own acts or omissions.