How did Section 230 become such a key part of that strategy? The answer is largely about Donald Trump, but the story doesn’t start with him. The person who really pioneered the use of Section 230 as a political cudgel is Trump’s onetime Republican primary opponent, Texas senator Ted Cruz. In 2018, during Zuckerberg’s congressional testimony following the Cambridge Analytica scandal, Cruz—who will also participate in Wednesday’s hearing—lectured the CEO about the law. “The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum,” he said. If Facebook was systematically censoring conservatives, a baseless grievance often advanced by Republican politicians, it would jeopardize its eligibility for that immunity.
Cruz’s claim helped push Section 230 into the debate over tech regulation, even though it was rank nonsense. Section 230 has nothing to do with political neutrality. It was added to the 1996 bill to solve a very specific problem. In the early days of the internet, interactive websites—message boards, sites with comments, and so on—faced a legal conundrum. A New York state judge had ruled that websites that took steps to moderate user posts would become liable for the content of those posts. The effect was a bit like saying a swimming pool owner is only liable for people drowning if he goes to the trouble of hiring lifeguards. It set up a terrible incentive structure in which websites would have to choose between allowing everything, including hateful and obscene content; trying to keep their platforms clean while assuming an enormous legal and financial risk; or banning all user-generated content.
Two members of Congress, Republican Chris Cox and Democrat Ron Wyden (now a senator), drafted Section 230 to fix those incentives. It has two important parts. The first says websites won’t be treated “as the publisher or speaker of any information provided by another information content provider.” In other words, they can’t be held liable for the material posted by users. If I slander you on Twitter, you can sue me, but you can’t sue Twitter. The second part says that websites are free to moderate content on their platforms without losing the immunity provided by the first part. That solves the lifeguard problem.
The arrangement held up without much controversy for two decades. But in the post-2016 backlash to Big Tech, Section 230 picked up critics on both sides of the aisle. Basically, some Democrats don’t like the first part of the law, while some Republicans don’t like the second part. Democrats want the platforms to police content more; Republicans want them to do it less. Democrats were particularly outraged last year when social media platforms refused to take down a viral video of House speaker Nancy Pelosi that had been doctored to make her appear to slur her words, as well as a misleading Trump campaign ad attacking Joe Biden. In an interview with The New York Times editorial board in January 2020, Biden said Section 230 should be “revoked immediately.” (He has had little to say about the law since then.) Republicans, on the other hand, fear aggressive enforcement of platform content policies, sensing that a harder line against misinformation and hateful content will land more heavily on their supporters. They also see content moderation as a stalking horse for Silicon Valley’s liberal-leaning employees to unfairly discriminate against conservatives.