Donald Trump is attacking Section 230 of the Communications Decency Act. This is a really bad move. He doesn’t understand it at all, and unfortunately, neither do many Republicans who’ve become vocal about this lately. This will likely make things worse for conservatives (and everyone) on the internet, not better.
It’s hard to find a non-partisan take, but The Guardian does well here. The narrative is mostly conservatives complaining about censorship (which is not always ideologically motivated, but often just the impossibility of content moderation at scale), or liberals complaining about conservatives (ignoring real problems with deplatforming stifling debate, even among progressives, as though only Nazis are being censored). Missing from the discussion is an objective analysis of this key piece of tech policy for the internet.
Before Section 230, internet platforms had to choose – either no moderation and no legal liability (like a phone company), or if they were moderating they could face legal liability for the speech of their users that they didn’t moderate. Section 230 protects the platforms so they can’t be held responsible for misconduct of their users. So if someone makes a death threat online, you can’t go after Facebook as the platform, even if they moderate some posts and didn’t delete that one – you have to go after the actual party responsible, the person who made the death threat. It’s common sense law.
There are two huge misunderstandings:
Myth #1: Section 230 is what allows platforms to moderate speech.
Nope. The First Amendment does. Social media companies are private entities, and the First Amendment forbids the government from restricting their speech – including their decisions about what content to carry. Repealing Section 230 doesn’t affect a platform’s ability to moderate.
Users have no free speech rights on social media because platforms aren’t government property – they’re private, like a shopping mall. That’s a separate problem: the online public square is a privately owned square. It has nothing to do with Section 230.
Myth #2: Removing the Section 230 liability shield will hold platforms accountable for unfair moderation practices.
Nope. The First Amendment protects that right. There’s no obligation to be “fair” when exercising your First Amendment rights (though it’s worth noting that Twitter also used the mechanism Trump is angry about to defend Mike Pence). Rather, Section 230 protects platforms from being held legally responsible, or from facing nuisance lawsuits, for the actions of their users, even if the platform engages in moderation. It’s an incentive to moderate without a heavy hand.
Without 230, things will get worse. Think about it: if platforms can be sued for the speech of their users, will they become more tolerant and leave more posts up? Or will they become more risk-averse and heavier-handed in taking down content they might get sued over?
Removing Section 230 protections means platforms can be held legally liable for the speech of their users. It means they’ll be more likely to censor and delete it. Rather than putting links beside Trump’s posts and risking lawsuits over what they believe contains misinformation, they’ll just exercise their constitutional First Amendment right to delete the posts and avoid legal liability. The First Amendment protects their right to take stuff down. Without 230, they can get punished for leaving stuff up! They’re gonna take more stuff down without 230. (And the notion that 230 could be amended to ensure “fairness” ignores the First Amendment, and the history of Republican opposition to the FCC fairness doctrine…)
The fight against Section 230 from conservatives (and liberals) is very misguided, and if successful, will likely backfire. What’s needed is a more nuanced discussion about the challenges of content moderation and free speech on a corporate internet.
4 thoughts on “The attack on Section 230 will backfire”
Before Section 230, internet platforms had to choose – either no moderation and no legal liability (like a phone company), or if they were moderating they could face legal liability for the speech of their users that they didn’t moderate.
*****
Removing Section 230 protections means platforms can be held legally liable for the speech of their users. It means they’ll be more likely to censor and delete it.
**********
Hi Blaise,
In the second quote, I think you have overlooked the first. That is to say: “they’ll be more likely to censor and delete it” UNLESS they choose to provide “no moderation and no legal liability (like a phone company)”.
I also was worried about this, but on longer reflection I realized that the choice is the key: there is a huge market for things like the ideal notion of Facebook or Twitter, which really would allow us to speak freely. Any company wishing to benefit from that market (in a post-230 world) would have to opt for the phone-company model. Anybody wishing to moderate would have to go “all the way” in order to protect themselves.
As I see it now, removing 230 would allow all of us to get what we want.
Best,
Gordon Friesen, Montreal
http://www.euthanasiediscussion.net/
Unfortunately, I think that’s just not true at all in practice.
Can you name a website you’ve been on that has NO moderation at all? Not for pornography? Not for legitimate harassment?
Not for spam? Spam filtering is content moderation.
I think the experiment of Parler – a “free speech” alternative to Twitter – proves the point. Forget for a second that it has little traction and won’t last more than a few years (Identica, Diaspora… this isn’t the first time someone’s tried to create a competitor to Twitter/Facebook); just look at the content moderation policies. Parler started out saying it wouldn’t moderate anything or ban anyone, and by June it was already doing so:
https://www.techdirt.com/articles/20200627/23551144803/as-predicted-parler-is-banning-users-it-doesnt-like.shtml
I agree that there is a market for platforms with stronger support for free speech. The problem is, that still requires moderation. Nobody actually wants to be on a platform where there’s no spam filtering. Spam filtering is content moderation.
I agree that I, and many others, would love to see platforms with less content moderation. But in practice, nobody actually wants to be on a platform with no content moderation. It gets pretty useless pretty quickly.
Thanks for that Blaise. I now realize that the question is not so easy to solve.
However, I don’t think past cases of unsuccessful competition to facebook and twitter are relevant, because the control component was not so evident before. Today, the problem really is one of complete information control.
As you say, stronger free speech is desirable. And considering that so many people can no longer function on the big platforms, it seems to me that the market will indeed speak. And the more they abuse the situation, the more market incentive there will be.
Therefore 230 does have an advantage, in that, if as you say, the censorship becomes worse, there will be a better chance for alternatives to survive.
Also, dissidents could in some cases get legal recourse for abusive content directed at them, as epitomized by that Catholic kid who successfully sued CNN over that ridiculous episode in front of the Capitol.
Best,
Gordon Friesen, Montreal
http://www.euthanasiediscussion.net/
I do agree that competition is ultimately the answer!
I think that’s a mistake a lot of people make in trying to counter “big tech” with regulation to “harm” them. The thinking goes: “big tech” is harmful and causing problems, so we have to do something to hurt them and rein in their power. Problem is, Plan A for “big tech” is to be unregulated, but Plan B is to be regulated in a way that puts their competitors out of business. That is, a lot of the regulations designed to control big tech end up entrenching those companies, because they’re the only tech companies big enough to comply with those sorts of regulations.
In general, I think the right move is to look at policy that encourages competition rather than imposes restrictions on platforms – because it’s typically only big tech that can comply with the restrictions.
I don’t agree with everything Cory Doctorow says, but he talks about concepts like “adversarial interoperability”:
https://www.sitepoint.com/adversarial-interoperability/
Or there was the old Franklin Street Statement about free network services:
https://blaise.ca/blog/2011/08/02/four-criteria-for-free-network-services/
The problem with Facebook’s dominance is that the cost of leaving is high, because you lose access to your social network. Policy that reduces switching costs or the dominance of big players, rather than entrenching their dominance with onerous regulations only they can comply with, seems to be the way to encourage competition. Because I do agree in the end it’s more competition that is necessary and the way to start solving some of these problems…