To understand Section 230, you have to understand how the law worked before Congress enacted it in 1996. At the time, the market for consumer online services was dominated by three companies: Prodigy, CompuServe, and AOL. Along with access to the Internet, these companies offered proprietary services such as real-time chat and online message boards.
Prodigy distinguished itself from rivals by advertising a moderated, family-friendly experience. Employees would monitor its message boards and delete posts that didn't meet the company's standards. And this difference proved to have an immense—and rather perverse—legal consequence.
In 1994, an anonymous user made a series of potentially defamatory statements about a securities firm called Stratton Oakmont, claiming on a Prodigy message board that a pending stock offering was fraudulent and that the firm's president was a "major criminal." Stratton Oakmont sued Prodigy for defamation in New York state court.
Prodigy argued that it shouldn't be liable for user content. To support that view, the company pointed back to a 1991 ruling that shielded CompuServe from liability for a potentially defamatory article. The judge in that case analogized CompuServe to a bookstore. The courts had long held that a bookstore isn't liable for the contents of a book it sells—under defamation, obscenity, or other laws—if it isn't aware of the book's contents.
But in his 1995 ruling in the Prodigy case, Judge Stuart Ain refused to apply that rule to Prodigy.
"Prodigy held itself out as an online service that exercised editorial control over the content of messages posted on its computer bulletin boards, thereby expressly differentiating itself from its competition and expressly likening itself to a newspaper," Ain wrote. Unlike bookstores, newspapers exercise editorial control and can be sued any time they print defamatory content.
The CompuServe and Prodigy decisions each made some sense in isolation. But taken together, they had a perverse result: the more effort a service made to remove objectionable content, the more likely it was to be held liable for content that slipped through the cracks. If these precedents had remained the law of the land, website owners would have had a powerful incentive not to moderate their services at all, since any attempt to filter out defamation, hate speech, pornography, or other objectionable content would only have increased their legal exposure for illegal material they failed to take down.