Digital Smarts - Understanding Section 230 of the Communications Decency Act


It will be very interesting to see what effect the executive order President Trump recently signed targeting Section 230 of the Communications Decency Act has on cyberbullying and misinformation online. While you may have never heard of this section of the law, it was created in 1996 to protect Internet platforms from liability for many of the things third parties say or do on them. But now it’s under threat by President Trump, who hopes to use the order to fight back against the social media platforms he believes are unfairly censoring him and other conservative voices. Some critics say he is trying to bully these platforms into letting him post anything he wants without correction or reprimand, even when he has broken a site’s rules about posting bullying comments or questionable information.

In a nutshell, Section 230 says that Internet platforms that host third-party content (for example, tweets on Twitter, posts on Facebook, photos on Instagram, reviews on Yelp, or a news outlet’s reader comments) are not liable for what those third parties post (with a few exceptions). For instance, if a Yelp reviewer were to post something defamatory about a business, the business could sue the reviewer for libel, but it couldn’t sue Yelp. Without Section 230’s protections, the Internet as we know it today would not exist; if the law were taken away, many websites driven by user-generated content would likely be shut down. As Senator Ron Wyden, one of the authors of Section 230, puts it, the law is both a sword and a shield for platforms: they’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

That doesn’t mean Section 230 is perfect. Some argue that it demands too little accountability from platforms, allowing some of the worst parts of the Internet (think cyberbullying that parents or schools struggle to have removed, or misinformation that stays online for all to see with little recourse) to flourish along with the best. Simply put, Internet platforms have been happy to use the shield to protect themselves from lawsuits, but they’ve largely ignored the sword that lets them moderate the bad things their users upload. It’s worth remembering that cyberbullying makes up less than one tenth of one percent of all online traffic, but it is still important for these sites to acknowledge their role and do more about it.

All that said, this protection has allowed the Internet to thrive. Think about it: Websites like Facebook, Instagram, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so. But if that free flow of information and creativity goes away, our online world will be very different.

So where do we stand? While the executive order sounds strict (and a little frightening, with the government making “watch lists” of people who post or support “certain kinds” of content), legal experts don’t seem to think much, if any, of it can be backed up, citing First Amendment concerns. It’s also unclear whether the Federal Communications Commission has the authority to regulate Section 230 in this way, or whether the president can change the scope of a law without any congressional approval.