The Current and Future Status of Section 230

Abstract

            The internet is a place that has benefited free speech, but it is also a place that has harbored many harms. Online service providers have had the opportunity to grow and to help the economy flourish, bringing with them a user base with unlimited access to information and social interaction. The harms, however, include defamation, harassment, online sex trafficking, election interference and misinformation, and terrorist activity. Section 230 has made it possible for both the good and the bad of the internet to grow. Before Section 230, online service providers were punished for moderating inappropriate content.

The question then becomes: what is the best way to adjust Section 230 to protect the good parts of the internet and encourage online service providers to filter out the bad? Each proposal has merit, and each proposal has flaws. The result is that Section 230 is unlikely to be amended, and the courts will continue to slowly carve out narrow exceptions for online service providers who develop or are responsible for publishing offensive content.

I. Introduction

            On December 19, 2020, President Trump tweeted “Peter Navarro releases 36-page report alleging election fraud ‘more than sufficient’ to swing victory to Trump . . . . A great report by Peter. Statistically impossible to have lost the 2020 Election. Big protest in D.C. on January 6th. Be there, will be wild!”[1] This tweet, along with others, became the center of congressional hearings discussing the former President’s culpability after supporters of the President stormed the Capitol on January 6, 2021.[2] On January 6th, President Trump published 25 tweets. Many of the tweets addressed the President’s belief that voter fraud had occurred. After the Capitol riot had begun, the last few tweets discussed the need for peaceful protest.[3]

            President Trump continued to tweet. “The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!” He continued: “To all of those who have asked, I will not be going to the Inauguration on January 20th.” Based on these two tweets, Twitter permanently suspended @realDonaldTrump, the handle for President Trump’s personal Twitter account, for violating the Glorification of Violence policy.[4] Facebook and Instagram soon followed suit.

            The reaction from both the political left and right was instantaneous. Those who opposed President Trump applauded Twitter for taking action against the President but were disappointed it took January 6th to prompt the decision.[5] Trump and his supporters lamented how Twitter was “banning free speech.”[6] Eventually, Trump started a social media site called Truth Social to encourage free speech.[7] Later, Elon Musk—concerned that Twitter’s policies led to censorship—purchased Twitter, rebranded it as X, and lifted the suspension of @realDonaldTrump.[8] His goal was also to encourage free speech.[9]

            Social media and other online platforms have not only permitted questionable speech from controversial figures but have also been a place where crime and other harms could grow. Individuals’ reputations have been ruined by false claims, businesses have suffered from negative (and perhaps false) reviews, women and children have been lured and sexually abused by platform users, bot accounts have been created to interfere with elections, and terrorists have used platforms to recruit and publish propaganda.[10]

Unfortunately, tracing and suing the original poster of defamatory or illegal content is impractical, and in some cases, impossible. Instead, some plaintiffs may wish to sue the online service provider—after all, they made it possible for the material to be public. However, Section 230 of the Communications Decency Act limits the liability of online service providers.[11]

            The tension between online free speech and the harms it might bring on society has brought Section 230 under great scrutiny. 47 U.S.C. § 230 is only about 900 words in length, but it provides vast protection to online service providers. Subsection (c)(1) states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[12] The Act goes on to state that these service providers will not be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”[13] In other words, online service providers are not liable for words posted by users. Further, these providers are not liable if they take measures to remove objectionable material.[14] Section 230 thus provides online services with more protection than the First Amendment alone would afford.

            Because several sympathetic plaintiffs have ultimately received little to no remedy for their losses, many critics have said that Section 230’s immunity provides too much protection for online service providers.[15] On the other side, proponents of Section 230 argue that the importance of maintaining free speech outweighs the social costs associated with that maintenance.[16] This article seeks to address the strengths and weaknesses of Section 230 and potential changes to the Section. However, to address the pros and cons of Section 230, some background information about the First Amendment and Section 230 is required. Thus, Part II will provide an overview of the First Amendment and the protections it provided to online service providers before the enactment of Section 230.[17] Part III will discuss the policy goals behind Section 230, the growth of the internet under the Section, and the eventual erosion of the Section.[18] Part IV will discuss the strengths and weaknesses of proposals seeking to repeal or adjust Section 230 and provide a conclusion as to the approach best suited to protect free speech and prevent social harms.[19]

II. Overview of First Amendment Protections to Online Services

            The internet was still very new to people in the late 1980s and early 1990s. Section 230 was signed into law as part of the Telecommunications Act of 1996.[20] This provided only a few years for courts to develop common law and determine how the First Amendment would apply to the internet before Section 230 provided more protections.

            The First Amendment states that “Congress shall make no law . . . abridging the freedom of speech, or of the press.”[21] However, the courts have never held that the First Amendment protections of free speech or of the press were absolute.[22] The same principle remained true for the early free speech cases dealing with online service providers.

            Early internet users typically connected to one of three main online service providers: CompuServe, Prodigy, and America Online. Each provider was the subject of lawsuits for defamation before Section 230 was law.[23] Ultimately, two cases shaped how the First Amendment applied to internet service providers before Section 230 was enacted: (1) Cubby v. CompuServe and (2) Stratton Oakmont v. Prodigy Services Co. To understand the rulings in Cubby and Stratton Oakmont, some context is required on how the First Amendment applies in cases that do not involve an internet service provider. The primary case relied on by both Cubby and Stratton Oakmont is Smith v. California.[24]

A. Smith v. California

            Mr. Eleazar Smith owned a small bookstore in Los Angeles, California.[25] Of the many books he sold, one was Sweeter Than Life, a book containing obscene material.[26] A California officer purchased the book, discovered the material, then arrested Mr. Smith for violating a city ordinance.[27] The ordinance in question made it unlawful “for any person to have in his possession any obscene or indecent writing, or book . . . in any place of business where . . . books . . . are sold or kept for sale.”[28] This ordinance had no element of scienter, and thus it was interpreted as imposing strict criminal liability.[29] That is, because the ordinance did not state the level of knowledge the possessor must have that the book contained obscene material, any possession—regardless of knowledge—would result in criminal liability.[30]

            Mr. Smith alleged that he had never read any of the books or magazines sold at his bookstore that contained the obscene material.[31] Simply put, he stated that he did not have time to read every book within his bookstore to see whether it contained obscene material. “I have certain duties to perform, and I haven’t got time to read.”[32] At trial, and on appeal, the judges rejected Mr. Smith’s argument. After determining that Sweeter Than Life was obscene, the judges reasoned that a bookseller cannot be ignorant of what is sold within the bookstore.[33]

            On appeal to the U.S. Supreme Court, Mr. Smith argued the city ordinance was a violation of the First Amendment.[34] Writing for the majority, Justice Brennan agreed.[35] Throughout his opinion, Justice Brennan focused on the ordinance’s chilling effect on speech.[36] Although the First Amendment allows ordinances to narrowly restrict obscenity, ordinances are not allowed to restrict non-obscene books.[37] Justice Brennan argued a restriction without an element of scienter was too broad.[38]

The ordinance here in question, to be sure, only imposes criminal sanctions on a bookseller if in fact there is to be found in his shop an obscene book. But our holding . . . does not recognize any state power to restrict the dissemination of books which are not obscene; and we think this ordinance’s strict liability feature would tend seriously to have that effect, by penalizing booksellers, even though they had not the slightest notice of the character of the books they sold.[39]

Consequently, Brennan concluded that the ordinance would discourage stores from selling any book or magazine the store had not personally reviewed. He went on, “And the bookseller’s burden would become the public’s burden, for by restricting him the public’s access to reading matter would be restricted. If the contents of bookshops and periodical stands were restricted to material of which their proprietors had made an inspection, they might be depleted indeed.”[40] Ultimately, this meant that the ordinance not only limited obscene material but made it materially difficult for a shop to maintain non-obscene material, for the shopkeeper’s burden would be too great.[41] “Through it, the distribution of all books, both obscene and not obscene, would be impeded.”[42]

            The concurrences in Smith made it clear that the ordinance was still free to target obscene material.[43] Additionally, if there had been evidence that Mr. Smith had knowledge of the obscene material, the ordinance could have been applied to him.[44] Although the court did not go further to outline what type of scienter requirement would be constitutional, the ruling was clear that some requirement would be permissible.[45]

            Smith is important because the ruling provides the first introduction of how the First Amendment would apply to internet service providers. This makes sense because, in structure, an internet service provider and a bookstore are similar. A bookstore purchases material from an author or publisher and then distributes that material. Similarly, third-party users post content to online service providers. The result of Smith is that a court is not willing to punish a bookstore for simply distributing material where the store had no knowledge the material was illegal.[46] Jeff Kosseff in The Twenty-Six Words That Created the Internet concisely defines the Smith rule as: “The First Amendment prohibits content distributors—such as bookstores—from being held legally responsible for the content they distribute unless a prosecutor or plaintiff makes some concrete demonstration about their scienter.”[47] As such, Smith creates a distinction between a distributor and a publisher. This distinction became very important for Cubby and Stratton Oakmont.

B. Cubby v. CompuServe

            Cubby v. CompuServe is the first case where a plaintiff attempted to hold an online service provider accountable for third-party content that appeared on the provider’s website.[48] CompuServe subscribers would pay either by the minute or by the month to use the provider’s services.[49] Once a subscriber, the user would have access to the provider’s database of information.[50] Users could also read bulletins to stay up to date on the latest news within particular industries.[51]

            Mr. Bob Blanchard, who had previously been a television journalist, founded Cubby, Inc. to provide products and services to the computer industry.[52] One such product was Skuttlebut, a newsletter discussing the broadcast industry.[53] Because Skuttlebut was a new newsletter, it was not yet associated with any online service providers, such as CompuServe.[54] However, Mr. Blanchard was a subscriber of CompuServe and could obtain information related to the broadcast industry by viewing bulletins related to the industry.[55] Rumorville, a newsletter run by Mr. Don Fitzpatrick, also competed in the broadcast industry and was well established on CompuServe.[56] Rumorville began posting defamatory content about Mr. Blanchard and Skuttlebut on CompuServe’s bulletins.[57] In response, Cubby, Inc. sued Mr. Fitzpatrick—as the publisher of Rumorville—and CompuServe for distributing the defamatory material.[58]

            CompuServe then filed a motion for summary judgment.[59] In its motion, CompuServe did not dispute that the statements were defamatory.[60] Instead, CompuServe claimed it was a mere distributor, rather than a publisher, of Rumorville’s newsletter.[61] Further, CompuServe argued that because it was only a distributor, “it cannot be held liable on the libel claim because it neither knew nor had reason to know of the allegedly defamatory statements.”[62] In support of this argument, CompuServe pointed to the lack of editorial control it had over Rumorville’s presence on CompuServe.[63] Up to this point, CompuServe had taken a “hands off” approach to regulating material in the database.[64] Whenever Rumorville uploaded a periodical to CompuServe’s database, it became instantly available to readers, without CompuServe’s review.[65] Further, CompuServe had no contract with, and did not compensate, Rumorville.[66] Nor did CompuServe receive compensation from Rumorville for hosting the periodical, other than the fees generated by the time users spent logged into CompuServe’s services.[67] Finally, CompuServe noted that it had not received any complaints regarding Rumorville until the initiation of the Cubby lawsuit.[68]

            Cubby, on the other hand, argued CompuServe was a publisher because “the information electronically published and distributed by CompuServe was distributed to subscribers by CompuServe’s own service and database. CompuServe thus controlled the method of distribution and the means for the defamatory material to reach the customer.”[69] Interestingly, Cubby also argued that CompuServe’s lack of editorial monitoring to prevent defamatory comments should result in liability.[70]

            The district court agreed with CompuServe’s position that it was a mere distributor.[71] First, the court stated the general rule that one who repeats defamatory material is subject to liability.[72] However, news vendors, bookstores, and libraries are not liable if they did not know or had no reason to know of the defamation.[73] Citing the rule in Smith, the court determined that “CompuServe’s CIS product is in essence an electronic, for-profit library that carries a vast number of publications and collects usage and membership fees from its subscribers in return for access to the publications.”[74] The court then noted that CompuServe exercised no editorial control over the publications it permitted and that doing so would not be feasible: “Obviously, the national distributor of hundreds of periodicals has no duty to monitor each issue of every periodical it distributes.”[75] As such, CompuServe was deemed to be a distributor, rather than a publisher, of Rumorville.[76]

            Having established that CompuServe was a distributor, the court’s next step was to determine whether CompuServe was liable as a distributor.[77] A distributor may be liable if it knew, or should have known, of the defamatory material.[78] Because periodicals were uploaded and made available instantly, without review, the burden fell on the plaintiffs to show that CompuServe had knowledge.[79] Here, Cubby could not show that CompuServe had knowledge of the defamatory material, so the motion for summary judgment was granted.[80]

C. Stratton Oakmont v. Prodigy Services Co.

            Prodigy Services Co. set itself apart from its competitors as the “family friendly” version of online service providers.[81] In essence, the services provided by CompuServe and Prodigy were the same. Prodigy, however, took the extra step of actively monitoring material appearing on the website. For instance, Prodigy warned users that “it will not carry messages that are ‘obscene, profane, or otherwise offensive.’”[82] To carry out this content moderation, Prodigy used software that screened for offensive language and Board Leaders who removed material in “bad taste.”[83]

            Mr. Daniel Porush was the president of Stratton Oakmont, Inc., a securities investment banking firm.[84] Stratton Oakmont underwrote the initial public offering (IPO) of Solomon-Page Group Ltd., an employee recruitment company.[85] Later the same day, Stratton Oakmont and Solomon-Page announced that Solomon-Page was losing its largest client.[86] The announcement prompted an unidentified user to access Prodigy and post that (1) Stratton Oakmont committed criminal acts and fraud relating to the IPO, (2) Mr. Porush was “soon to be proven criminal,” and (3) Stratton Oakmont was a “cult of brokers who either lie for a living or get fired.”[87] Because the user was not identifiable, Stratton Oakmont and Mr. Porush initiated a defamation lawsuit against Prodigy.[88]

            Arguing that Prodigy was a publisher, Stratton Oakmont pointed to several statements by Prodigy that it was a family oriented computer network and would take measures to regulate the material appearing on the network.[89] Additionally, Stratton Oakmont focused on how Prodigy promulgated content guidelines encouraging users to refrain from posting insulting material; how Prodigy used its software screening program to automatically remove offensive language; and how Prodigy used Board Leaders to remove content perceived as offensive.[90]

            On the other hand, Prodigy relied on Cubby, citing the burden of regulating the more than 60,000 posts made to the network each day.

            The court began by noting the liability distinctions between distributors and publishers. As mentioned, a distributor is liable only if it knew or had reason to know of the offensive material.[91] A publisher is liable for republishing libel “as if he had originally published it.”[92]

            Next, the court held that Prodigy was a publisher.[93] Pointing to the automatic screening system and the use of guidelines for Board Leaders to remove distasteful posts, the court distinguished Prodigy from CompuServe.[94] “Prodigy has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards. Based on the foregoing, this Court is compelled to conclude . . . Prodigy is a publisher rather than a distributor.”[95] The court went on to say that Prodigy sought the benefits of censoring speech on its network, but not “the legal liability that attaches to such censorship.”[96]

D. Takeaways from Cubby and Stratton Oakmont

            The rulings in Cubby and Stratton Oakmont provide a few key takeaways. First, to determine whether an online service provider is a publisher or a distributor, a court looks to whether the provider engages in any editorial moderation of content.[97] Second, online service providers are fully liable for illegal or offensive content if they are publishers.[98] Third, if the online service provider is a distributor, it may be liable for illegal or offensive content only if it knew or had reason to know of the material.[99] Thus, a distributor takes on liability only if it has notice of offensive material and makes no effort to remove it.[100] Because liability is more limited for distributors, online service providers will seek to be classified as distributors. Consequently, the rulings in Cubby and Stratton Oakmont encourage online service providers to turn a blind eye to the content on their websites and to refrain from removing offensive material until it is brought to the provider’s attention.[101]

III. Overview of Section 230 Protections

A. Policy Goals of Section 230

            Stratton Oakmont led Congress to pass Section 230 of the Communications Decency Act.[102] Because the internet was so new, opinions about it had not yet formed along party lines, and the effort to pass the Section went relatively unnoticed.[103] The sponsors of the bill were Chris Cox and Ron Wyden.[104] The two men were intrigued by the positive social and economic impact the internet might have. As such, they wanted to create a law that would allow the internet to grow and flourish, unimpeded by lawsuits. Both also wanted the internet to be a safe space for families.[105] Seeing the result in Stratton Oakmont—an online service provider punished for engaging in good faith content moderation—Mr. Cox and Mr. Wyden drafted the bill to encourage online service providers to engage in good faith content moderation without facing liability as publishers.[106]

            These policy goals appear in three places in 47 U.S.C. § 230.

The first is the praise of the internet found in Subsection (a). For instance, the congressional findings include statements such as: “The rapidly developing array of Internet . . . represent an extraordinary advance in the availability of educational and informational resources to our citizens.”[107]

Second, Congress took the time to set forth five policy purposes of Section 230: (1) “to promote the continued development of the Internet,” (2) to encourage the free market on the internet, (3) to encourage software development of content moderation, (4) to remove the incentives against developing technology limiting “access to objectionable or inappropriate online material,” and (5) to encourage enforcement of criminal laws against obscenity, stalking, and harassment.[108]

Third, the policies listed in Section 230 are exemplified in the heading of Subsection (c): “Protection for ‘Good Samaritan’ blocking and screening of offensive material.”[109] Thus, Congress did not want online service providers like Prodigy to be punished for taking the initiative to screen offensive material. As such, Section 230 statutorily overruled Stratton Oakmont by providing that an online service provider is not to be treated as a publisher, or even a distributor, simply because it engaged in content moderation.[110]

B. Growth of the Internet—Zeran v. America Online

            Shortly after Congress passed Section 230, the courts were asked to determine how far its protections against liability extended. The first—and probably most important—case to present the question was Zeran v. America Online.[111] Just like CompuServe and Prodigy, America Online (AOL) was an online service provider where subscribers could communicate with one another over email and post messages to bulletins for others to view.[112] Over the course of several days, one bulletin maintained by AOL received several postings by an unidentified person.[113] The postings advertised “Naughty Oklahoma T-Shirts,” bumper stickers, and other merchandise reflecting offensive slogans related to the bombing in Oklahoma City.[114] The postings listed a phone number, told readers to “ask for Ken,” and urged them to keep calling due to high demand.[115] Mr. Kenneth Zeran was a Seattle resident operating a business out of his home.[116] He first learned about the postings on AOL when he received several phone calls with angry, derogatory messages and death threats.[117] Mr. Zeran contacted AOL several times requesting that the company remove the postings from the bulletin.[118] Although AOL agreed to remove the postings, the removal never occurred, and more postings appeared.[119] Mr. Zeran continued to receive phone calls, disrupting his business, his ability to sleep, and his ability to leave his home safely.[120] Consequently, Mr. Zeran filed a lawsuit against AOL for negligence in unreasonably delaying the removal of the defamatory messages.[121] In response, AOL claimed Section 230 provided immunity.[122]

            The Court of Appeals for the Fourth Circuit agreed with AOL that Section 230 provided immunity.[123] After discussing the policy reasons for Section 230, the court turned to Zeran’s arguments against Section 230’s applicability.

Zeran’s first argument was that Section 230 only eliminated publisher liability; thus, distributor liability could remain intact.[124] However, the court stated that a distributor is merely a subset of a publisher.[125] Here, AOL was legally considered a publisher because “every one who takes part in the publication . . . is charged with publication.”[126] That is, “even distributors are considered to be publishers for purposes of defamation law.”[127] Significantly, this means that Section 230 applies to both publishers and distributors, as those terms were used in Cubby and Stratton Oakmont.[128]

Next, Zeran argued that because he had provided notice to AOL regarding the postings, AOL should be liable as a distributor with notice of the offensive postings.[129] The court rejected this argument by pointing to the policy purposes of Section 230 and the chilling effect a notice requirement would have on providers’ incentives to restrict offensive speech.[130] As such, Section 230 applied, and AOL was immune from liability.

The impact of Zeran was profound. Virtually every claim against AOL or any other online service provider failed based on Section 230 immunity. This protection against liability allowed some of the largest companies in the United States to form, such as Google, Yahoo, eBay, Microsoft, Amazon, Wikipedia, and Yelp. Most of these companies are also based in the Ninth Circuit, where courts have found Section 230 immunity in the vast majority of cases.[131] The policy goals of Section 230 were working; the internet was growing, and the economy was expanding. One study by Cristian M. Dippon estimated that the United States would lose over forty billion dollars in GDP annually without these legal protections.[132]

C. Internet Erosion—The Exceptions to Section 230

Eventually, the internet started to fall out of favor with the public. This decline in internet exceptionalism grew with several unsuccessful but very sympathetic plaintiffs. For instance, consider Ellen Batzel, whose business and reputation were severely damaged by false posts claiming she was the granddaughter of a Nazi art thief.[133] Further consider Christianne Carafano, a famous actress who received several sexual propositions, and whose family received death threats, based on fake profiles created on Matchmaker.com.[134] Another story involves a minor girl who lied about her age to create a Myspace account; she was later lured and sexually abused by an individual she met through Myspace.[135] These individuals received no relief from the courts based on Section 230 immunity.[136] Eventually, the courts became restless enough to provide narrow exceptions to Section 230 by defining the terms “responsible” and “development” as they appear in the Section. The main case creating this narrow exception is Fair Housing Council of San Fernando Valley v. Roommates.com.[137]

Roommates.com was an online service where users could create profiles and match with others renting out spare rooms.[138] Before subscribers could search for listings, they were required to create a public profile.[139] While creating the profile, users were required to disclose their sex, sexual orientation, and whether they would bring children to the household.[140] They were also required to state their preferences as to the same traits in a roommate.[141] Users were additionally given the option of providing “Additional Comments” to describe themselves in an open-ended essay.[142] These responses were then publicly displayed on the individual’s profile page.[143] The Fair Housing Council of San Fernando Valley sued Roommates.com, alleging the questionnaire violated the Fair Housing Act’s antidiscrimination provisions.[144]

The court agreed that Roommates.com received no Section 230 immunity for creating the public profiles.[145] The Fair Housing Council claimed that Roommates.com created the questions and the choice of answers, then designed the website around the responses.[146] Thus, requiring subscribers to answer the questions “unlawfully ‘cause[s]’ subscribers to make a ‘statement . . . with respect to the sale or rental of a dwelling that indicates [a] preference, limitation, or discrimination,’ in violation of [the law].”[147] That is, Roommates.com induced third parties to express illegal preferences, and Section 230 does not provide immunity for such activity.[148] Here, although the subscribers were the ones ultimately creating the content, Roommates.com helped develop the profiles at least “in part.”[149] As such, Roommates.com was itself an information content provider and could be held responsible for those actions.[150]

Further, Roommates.com could not receive Section 230 protection for its search function.[151] Roommates.com’s search function filtered listings based on the discriminatory criteria users entered in their initial profile forms.[152] The court said this was a means of developing the discriminatory content.[153] Cautious that this could lead to misunderstanding of what “develops” unlawful content, the court provided an example of what is not “development” under Section 230: “If an individual uses an ordinary search engine to query for a ‘white roommate,’ the search engine has not contributed to any alleged unlawfulness in the individual’s conduct; providing neutral tools to carry out what may be unlawful or illicit searches does not amount to ‘development’ for purposes of the immunity exception.”[154]

Finally, Roommates.com did not develop unlawful content when it asked for “Additional Comments” and was therefore immune under Section 230 for unlawful comments by third-party users in their responses.[155] Although Roommates.com encouraged users to provide something in the Additional Comments section, it did not encourage users to create discriminatory content.[156]

The takeaway from Roommates.com is that the court carved out a narrow exception to Section 230 immunity: an online service provider becomes an information content provider—and thus loses Section 230 protection—if it develops, or is responsible for, the unlawful or offensive content.[157] The courts have continued to carve minor exceptions for sympathetic plaintiffs.[158]

IV. Discussion of Options

            Section 230 met the policy goals it was initially created to meet. The internet grew unhindered and without the fear of liability. Silicon Valley grew in large part because the internet was a place where the market could flourish without inhibitors. Online service providers could provide a platform for users to interact and transfer information quickly without the worry of litigation. In other words, Section 230 allowed the internet to produce many educational, economic, and social features enjoyed by hundreds of millions of users. But Section 230 also allowed users to abuse flaws within the platforms without those platforms being held accountable. Victims targeted by unidentifiable users—such as Zeran, Batzel, and Carafano—were unable to obtain any remedy for the abuse they suffered.[159] Women and minors continue to be sexually targeted via the internet. Social media platforms became home to election interference and the spread of election misinformation. False reviews of businesses continue to hurt reputations.

            Because Section 230 allowed the internet to fully develop both its positive and negative features, commentators are split on whether Section 230 has had a net positive impact on the internet.[160] In looking for the best way to keep the good and remove the bad elements of the internet, many suggestions have arisen for how to move forward with internet regulation. These suggestions range from repealing Section 230 and relying on First Amendment protections, to keeping Section 230 unchanged and following the court-created exceptions, to making modest alterations to Section 230.[161]

A. Repeal Section 230 and Rely on First Amendment Protections

            If Section 230 were repealed, the law immediately prior to Section 230 would again control. Cubby and Stratton Oakmont would become the leading cases shaping internet regulation moving forward.[162] On a case-by-case basis, courts would need to determine whether a defendant online service provider was acting as a publisher or a distributor.[163] If the online service provider is acting as a publisher—meaning the provider is following content guidelines and making editorial selections of which posts are approved—then liability will attach.[164] On the other hand, if the online service provider is not acting as a publisher—there are no content guidelines or editorial decisions—but instead as a distributor, then liability attaches only if the provider knew or had reason to know of the offensive material.[165]

            The positive result of repealing Section 230 would be the increased ease of holding online service providers accountable for offensive material created by users. Online service providers would essentially be put on notice that if any offensive material is reported, they must remove the material or face potential liability.[166] This approach is similar to the way most other countries treat the internet—if a post is offensive, the provider is liable once it receives notice of the offensive nature and does nothing to remedy the situation.[167]

            Unfortunately, repealing Section 230 could decrease the positive impacts made by online service providers. For instance, with less protection against liability for user conduct, online service providers might remove the ability for users to interact altogether rather than face potential liability for user content.[168] Consequently, the positive educational, economic, and social impact of the internet would take a severe blow.

            Additionally, repealing Section 230 could increase the negative material found online. Without Section 230, online service providers face increased liability if they take the initiative to engage in content moderation.[169] By establishing and following community guidelines, providers would be deemed publishers and exposed to liability for any offensive material that appears on the platform.[170] Simply put, providers would be incentivized to have no community standards at all, meaning offensive or dangerous material could make it onto the platform unchecked.[171] Picture a Facebook without fact-checking of false information or the hiding of “sensitive images.”

            Consider the fears of the court in Zeran regarding a notice-based system:[172]

            If computer service providers were subject to distributor liability, they would face potential liability each time they receive notice of a potentially defamatory statement—from any party, concerning any message. Each notification would require a careful yet rapid investigation of the circumstances surrounding the posted information, a legal judgment concerning the information’s defamatory character, and an on-the-spot editorial decision whether to risk liability by allowing the continued publication of that information . . . . The sheer number of postings on interactive computer services would create an impossible burden in the internet context . . . . [Providers] would have a natural incentive simply to remove messages upon notification, whether the contents were defamatory or not.

            Similarly, notice-based liability would deter service providers from regulating the dissemination of offensive material over their own services.

            More generally, notice-based liability for interactive computer service providers would provide third parties with a no-cost means to create the basis for future lawsuits.

Ultimately, simply repealing Section 230 would not resolve the issues most individuals have with offensive material found on the internet.

B. Keep Section 230 with No Changes

            Another option is to keep Section 230 with no changes and to rely on the courts to carve out narrow exceptions for particularly egregious conduct. As the law currently stands in Section 230 jurisprudence, there is a presumption of immunity for online service providers unless they develop or are responsible for content that appears on their platforms.[173] Helping develop or becoming responsible for offensive material classifies the online service provider as an information content provider, for which there is no immunity.[174]

            The positive aspect of this option is that the courts and online service providers understand the current status of Section 230. The law is predictable, and plaintiffs and defendants alike are aware of their likelihood of success. As the law currently stands, lawsuits are discouraged, and the dissemination of information is encouraged. The educational, economic, and social goals of Section 230 remain intact and unrestricted.

            However, because Section 230 immunity runs so deep, except in rare cases much of the negative content viewed online would remain in place. Attorneys must carefully review each case to evaluate whether the online service provider developed or is responsible for any of the offensive material, and then they must actually make that argument.[175] Consider the negative impact of Backpage, a website where thousands of ads sexually soliciting minors appeared.[176] Backpage would edit the ads to remove any references to minors, but the ads would otherwise remain largely intact.[177] When several minors were abused by their pimps, lawsuits arose, but they were unsuccessful; Section 230 protected Backpage from liability.[178] There was a solid argument that the ads were developed by Backpage because the website took several editorial steps, including altering the content that appeared in the ads. However, the lawyers never made the argument that Backpage developed the ads.[179]

            In summary, keeping Section 230 as is would largely immunize online service providers. When there is a case that could go either way, it takes careful lawyering to successfully hold a provider liable for the offensive material.

C. Potential Amendments to Section 230

            A final option is for Congress to statutorily amend Section 230. Although many proposals have been offered,[180] two potential amendments currently stand out. First, impose a duty-of-care reasonableness standard that providers must satisfy to qualify for Section 230 immunity.[181] Second, carve out exceptions that block immunity for providers who assist or host content that is particularly reprehensible—specifically targeting online sex trafficking.[182]

            The first option would create a duty of care by drafting a reasonableness standard into Section 230, under which online service providers must meet the standard of care to qualify for Section 230 immunity. The proposed language of the amendment was drafted by Danielle Keats Citron and Benjamin Wittes. It reads:[183]

No provider or user of an interactive computer service that takes reasonable steps to prevent or address unlawful uses of its services shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.

The original heading of Subsection (c) of Section 230 promises protection for “Good Samaritan” good faith blocking and screening of offensive material.[184] Significantly, “no Good Samaritan behavior was actually required for the main § 230 immunity to attach.”[185] The proposed amendment would require some Good Samaritan behavior.[186] This amendment is promising because it continues to protect online service providers engaged in content moderation but removes the incentive for providers to take a hands-off approach to reprehensible content.[187] In other words, the amendment is a strong attempt at encouraging online service providers to remove the bad content while keeping the good parts of Section 230.

However, there are flaws to the reasonableness-standard amendment. First, the amendment is unpredictable.[188] If the amendment were passed, online service providers would likely respond out of a fear of litigation. “Because the proposed amendment conditions immunity on how an online platform behaves, platforms will rush to alter their behavior in a way that allows them to satisfy the new duty of care requirement.”[189] The question then becomes: what is considered reasonable? Who defines what is reasonable, the courts or Congress? If the answer is the courts, companies would be required to undergo extensive litigation to persuade a court that their standards are reasonable.[190] If such litigation arose for a small company, the expenses could bankrupt the company, a result not contemplated by the policy purposes of Section 230.[191] There is also the question of what must be reasonable: whether the standard applies to the platform’s design, its content guidelines, or its response to reported content.[192] If the standard applies to all three, must a court address what is reasonable for each function of the platform? These are all questions that would need to be addressed, and answering them would be expensive.

The second option is to carve out exceptions in Section 230 to address particularly egregious content. Specifically, proposals exist to fight online sex trafficking.[193] The vast majority of victims in Section 230 cases are women and minors.[194] An amendment designed to protect women and children by removing immunity for online service providers—such as Backpage—who enable online sex trafficking would remove many of the harms found online.[195]

Again, however, there are issues with an amendment targeting online sex trafficking. First, there are other victims, “such as victims of image-based sexual abuse, unauthorized gun sales, terroristic threats, or malicious catfishing,” who would receive no protections.[196] Additionally, Congress has already attempted to protect sex trafficking victims by passing the Fight Online Sex Trafficking Act of 2017 (FOSTA).[197] The attempt was disrupted by disagreements across party lines, and the final draft was sloppily executed.[198] As a result, the law vaguely created an offense for providers who “conspire with the intent to promote or facilitate the prostitution of another person,” or “act in reckless disregard of the fact that such conduct contributed to sex trafficking.”[199] The law also had a negative impact on online service providers who did not want to take any chances with it.[200] Craigslist shut down its personals section two days after the law was passed and told its users, “Any tool or service can be misused. We can’t take such risk without jeopardizing all our other services, so we are regretfully taking craigslist personals offline. Hopefully we can bring them back some day. To the millions of spouses, partners, and couples who met through craigslist, we wish you every happiness!”[201]

Based on the growing partisan division regarding the internet and the protection of internet speech, it is unclear how soon—if at all—Section 230 will be amended.

V. Conclusion

            The internet is a place that has benefited free speech, but it is also a place that has harbored many harms. Online service providers have had the opportunity to grow and to help the economy flourish, bringing with them a user base with unlimited access to information and social interaction. The harms, however, include defamation, harassment, online sex trafficking, election interference and misinformation, and terrorist activity. Section 230 has made it possible for both the good and the bad of the internet to grow. Before Section 230, online service providers were punished for moderating inappropriate content. The question then becomes: what is the best way to adjust Section 230 to protect the good parts of the internet and encourage online service providers to filter out the bad? Each proposal has merit, and each proposal has flaws. The result is that Section 230 is unlikely to be amended, and the courts will continue to slowly carve out narrow exceptions for online service providers who develop or are responsible for publishing offensive content.

* William Laursen is a J.D. Candidate at Drake University Law School, 2025. B.A., Political Science, Brigham Young University, 2021. All opinions herein expressed, and all errors committed, are the Author’s.

[1] Tom Dreisbach, How Trump’s ‘will be wild!’ Tweet Drew Rioters to the Capitol on Jan. 6, NPR (July 13, 2022, 3:42 PM), https://www.npr.org/2022/07/13/1111341161/how-trumps-will-be-wild-tweet-drew-rioters-to-the-capitol-on-jan-6.

[2] Jude Sheerin, Capitol Riots: ‘Wild’ Trump Tweet Incited Attack, Says Inquiry, BBC (July 12, 2022), https://www.bbc.com/news/world-us-canada-62140410.

[3] Gerhard Peters & John T. Woolley, Donald J. Trump, Tweets of January 6, 2021, The American Presidency Project (Jan. 6, 2021), https://www.presidency.ucsb.edu/documents/tweets-january-6-2021.

[4] X, Permanent Suspension of @realDonaldTrump, X Blog (Jan. 8, 2021), https://blog.x.com/en_us/topics/company/2020/suspension.

[5] Bobby Allyn & Tamara Keith, Twitter Permanently Suspends Trump, Citing ‘Risk of Further Incitement of Violence’, NPR (Jan. 8, 2021, 6:29 PM), https://www.npr.org/2021/01/08/954760928/twitter-bans-president-trump-citing-risk-of-further-incitement-of-violence.

[6] Id.

[7] What We Know About Truth Social, Donald Trump’s Social Media Platform, Associated Press (updated Mar. 27, 2024, 12:48 PM), https://apnews.com/article/truth-social-donald-trump-djt-ipo-digital-world-7437d5dcc491a1459a078195ae547987.

[8] Max Zahn, A Timeline of Elon Musk’s Tumultuous Twitter Acquisition, ABC News (Nov. 11, 2022, 1:21 PM), https://abcnews.go.com/Business/timeline-elon-musks-tumultuous-twitter-acquisition-attempt/story?id=86611191.

[9] Id.

[10] Jeff Kosseff, The Twenty-Six Words That Created the Internet, Cornell U. Press (2019).

[11] 47 U.S.C. § 230 (2024).

[12] 47 U.S.C. § 230(c)(1).

[13] 47 U.S.C. § 230(c)(2).

[14] Id.

[15] See Kosseff, supra note 10.

[16] See id.

[17] See infra Part II.

[18] See infra Part III.

[19] See infra Part IV.

[20] Kosseff, supra note 10, at 77.

[21] U.S. Const. amend. I.

[22] Smith v. California, 361 U.S. 147, n. 2 (1959) (Black, J., concurring).

[23] See infra Part II.B and C.

[24] Smith v. California, 361 U.S. 147 (1959).

[25] Id. at 216.

[26] Kosseff, supra note 10, at 19.

[27] Id.

[28] Id. at 17.

[29] Smith, 361 U.S. at 149.

[30] Id. at 149.

[31] Kosseff, supra note 10, at 21.

[32] Id.

[33] Id.

[34] Smith, 361 U.S. at 149.

[35] Id. at 155.

[36] See id. at 150–54.

[37] Id. at 152.

[38] Id.

[39] Id.

[40] Id. at 153.

[41] Id.

[42] Id. at 154.

[43] Smith v. California, 361 U.S. 147 (1959) (Black, J., concurring).

[44] Id.

[45] Smith, 361 U.S. at 152.

[46] Id.

[47] Kosseff, supra note 10, at 27.

[48] Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018).

[49] Kosseff, supra note 10, at 37.

[50] Id.

[51] Id.

[52] Id. at 38.

[53] Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 138 (S.D.N.Y. 1991).

[54] Kosseff, supra note 10, at 38.

[55] Cubby, 776 F. Supp. at 138.

[56] Id.

[57] Id.

[58] Id.

[59] Id. at 137.

[60] Id. at 138.

[61] Id.

[62] Id.

[63] Id. at 140.

[64] Kosseff, supra note 10, at 38.

[65] Cubby, 776 F. Supp. at 137.

[66] Id.

[67] Id.

[68] Id.

[69] Kosseff, supra note 10, at 41.

[70] Id.

[71] Cubby, 776 F. Supp. at 140.

[72] Id. at 139.

[73] Id.

[74] Id. at 140.

[75] Id.

[76] Id.

[77] Id. at 141.

[78] Id.

[79] Id.

[80] Id.

[81] Kosseff, supra note 10, at 44.

[82] Id.

[83] Stratton Oakmont, Inc. v. Prodigy Services Co., No. Trial IAS Part 34, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).

[84] Id. at *1.

[85] Id.

[86] Kosseff, supra note 10, at 45.

[87] Stratton Oakmont, 1995 WL 323710 at *1.

[88] Id.

[89] Id.

[90] Id. at *2.

[91] Id.

[92] Id. at *3.

[93] Id.

[94] Id. at *3–4.

[95] Id. at *4.

[96] Id. at *5.

[97] See id. at *3–5; Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 138 (S.D.N.Y. 1991).

[98] See Stratton Oakmont, 1995 WL 323710 at *3.

[99] See id. at *3–5; Cubby, 776 F. Supp. at 138.

[100] See Stratton Oakmont, 1995 WL 323710 at *3; Cubby, 776 F. Supp. at 138.

[101] See Stratton Oakmont, 1995 WL 323710 at *3; Cubby, 776 F. Supp. at 138.

[102] Anupam Chander, How the Law Made Silicon Valley, 63 Emory L.J. 639, 651 (2014).

[103] Kosseff, supra note 10, at 74.

[104] Id. at 61.

[105] Id. at 57–61.

[106] Id. at 60.

[107] 47 U.S.C. § 230(a)(1) (2024).

[108] 47 U.S.C. § 230(b).

[109] 47 U.S.C. § 230(c).

[110] Zeran v. America Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997).

[111] Id.; Eric Goldman, The Ten Most Important Section 230 Rulings, 20 Tul. J. Tech. & Intell. Prop. 1, *3 (2017).

[112] Zeran, 129 F.3d at 329.

[113] Id.

[114] Id.

[115] Id.

[116] Id.

[117] Id.

[118] Id.

[119] Id.

[120] Id.

[121] Id. at 328.

[122] Id. at 329.

[123] Id.

[124] Id. at 331.

[125] Id. at 332.

[126] Id.

[127] Id.

[128] Id.

[129] Id.

[130] Id. at 333.

[131] Chander, supra note 102, at 651.

[132] Kosseff, supra note 10, at 121.

[133] See Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003).

[134] See Carafano v. Metrosplash.com, Inc., 339 F.3d 1119 (9th Cir. 2003).

[135] See Doe v. MySpace, Inc., 528 F.3d 413 (5th Cir. 2008).

[136] See Batzel, 333 F.3d 1018; Carafano, 339 F.3d 1119; Doe, 528 F.3d 413.

[137] Fair Housing Council of San Fernando Valley v. Roommates.com, 521 F.3d 1157 (9th Cir. 2008).

[138] Id. at 1161.

[139] Id.

[140] Id.

[141] Id.

[142] Id.

[143] Id. at 1162.

[144] Id.

[145] Id. at 1164.

[146] Id.

[147] Id. at 1165.

[148] Id.

[149] Id. at 1171.

[150] Id. at 1164.

[151] Id. at 1161.

[152] Id.

[153] Id.

[154] Id. at 1169.

[155] Id. at 1173.

[156] Id.

[157] See id.

[158] Jeff Kosseff, The Gradual Erosion of the Law That Shaped the Internet: Section 230’s Evolution Over Two Decades, 18 Colum. Sci. & Tech. L. Rev. 1, *36–37 (2017); Brent Skorup & Jennifer Huddleston, The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation, 72 Okla. L. Rev. 635 (2020).

[159] See Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003); Carafano v. Metrosplash.com, Inc., 339 F.3d 1119 (9th Cir. 2003); Doe v. MySpace, Inc., 528 F.3d 413 (5th Cir. 2008); see also Ann Bartow, Online Harassment, Profit Seeking, and Section 230, 95 B.U. L. Rev. Annex 101 (2015).

[160] Samuel Won, A More Reasonable Section 230 of the CDA: Imposing a Pre-Defined Duty of Care Requirement on Online Platforms, 57 Ga. L. Rev. 1413 (2023).

[161] See infra Part IV.A, B, and C.

[162] See Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 138 (S.D.N.Y. 1991); Stratton Oakmont, Inc. v. Prodigy Services Co., No. Trial IAS Part 34, 1995 WL 323710 (N.Y.S. May 24, 1995).

[163] See Stratton Oakmont, 1995 WL 323710.

[164] See id.

[165] See Cubby, 776 F. Supp. 135.

[166] See id.

[167] See Kosseff, supra note 10, at 153.

[168] See id. at 270–71 (discussing how Craigslist removed its personals after a law was passed providing fewer protections).

[169] See Zeran v. America Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997) (outlining the consequences for engaging in good faith content moderation).

[170] See id.

[171] Kosseff, supra note 10, at 56.

[172] Zeran, 129 F.3d at 333.

[173] Fair Housing Council of San Fernando Valley v. Roommates.com, 521 F.3d 1157 (9th Cir. 2008).

[174] Id.

[175] See Kosseff, supra note 10, at 263.

[176] Id.

[177] Id.

[178] Id.

[179] Id.

[180] Gregory M. Dickinson, The Internet Immunity Escape Hatch, 47 B.Y.U. L. Rev. 1435 (2022).

[181] Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 Fordham L. Rev. 401, 419 (2017).

[182] Kosseff, supra note 10, at 252.

[183] Citron & Wittes, supra note 181; Won, supra note 160, at 1429–30.

[184] 47 U.S.C. § 230(c) (2024).

[185] Chander, supra note 102, at 651.

[186] See Won, supra note 160, at 1429–30.

[187] Id.

[188] Id. at 1436.

[189] Id.

[190] Id.

[191] Id. (discussing a company called Veoh, providing services similar to YouTube, that went bankrupt because of litigation expenses).

[192] Id. at 1442.

[193] Kosseff, supra note 10, at 253.

[194] Id. at 209.

[195] Id. at 253.

[196] See Won, supra note 160, at 1427–28.

[197] Id. at 1432.

[198] Kosseff, supra note 10, at 270.

[199] Id.

[200] Id. at 270–71.

[201] Id.
