Recently Elon Musk stated that one of his motivations in purchasing Twitter was that he was “against censorship”:
By “free speech”, I simply mean that which matches the law.
I am against censorship that goes far beyond the law.
If people want less free speech, they will ask government to pass laws to that effect.
Therefore, going beyond the law is contrary to the will of the people.
So there are two problems here, both having to do with the meaning of the word “censorship”. One problem is rather technical and pedantic, having to do with the dictionary definition of the word. The other problem, however, is more fundamental and is worth having as a serious debate.
First, I want to say that I don’t dislike Elon Musk. In fact I am quite a fan of Space-X and particularly Musk’s leadership of it. I’m somewhat less sanguine about Tesla (but, full disclosure, I own a fair bit of Tesla stock).
So let’s get the first, more pedantic issue out of the way. Technically, the word “censorship” only applies to actions by the government or some other coercive authority. If the government tells me I can’t print something, or say something, or broadcast over the airwaves — that is censorship.
However, if I own a printing press, I am free to decide who gets to use it. Denying someone the use of my printing press — or any other medium of communication which I own — is not censorship; and in fact my freedom to decide how my printing press is used is exactly the kind of freedom that Musk celebrates.
Thus, actions by media corporations are not, by definition, censorship. Things get a little bit weird when the corporation has a monopoly on communication, such that you have no alternative means to get your message out. In a case like that, the corporation is acting as a kind of quasi-government, so the word censorship might be applicable. However, that is certainly not the case with Twitter.
Now, let’s get to the deeper and more interesting problem, which is this: social media platforms like Twitter don’t just convey information; they selectively amplify it. And herein lies the problem.
Twitter and Facebook and all the others decide which posts should be broadcast widely, and which posts languish in obscurity. This selective amplification is done using sophisticated algorithms which they have developed, but the ethical principle would be the same even if the decisions were made by human beings.
The important thing to realize is that amplification, or the lack of it, is not the same as censorship. If Twitter decides not to broadcast my tweet to a million people, that is not censorship.
This is important because I believe that the algorithms used by social media platforms should be regulated. In fact, I believe that these algorithms are part of what is tearing apart our society right now. I even think that the long-term survival of Western Civilization is at risk because of the way we amplify some kinds of speech and not others.
The algorithms used by social media platforms are designed to “maximize engagement” — to make people use the platforms more. They are designed that way because increasing user engagement increases advertising revenue, which increases company profits. The algorithms are constantly being tweaked in ways that bring ever-increasing profit.
Unfortunately, the most profitable content is content that stokes fear, rage and mistrust. Lies spread faster than truth, because they are novel and intriguing. Conspiracy theories and misinformation are amplified because they are sensational and provoke strong emotions.
Social media platforms are undermining our trust in social institutions and putting people’s lives, health and well-being at risk.
We need to stop the amplification of harmful information. This is not the same as censorship, which would be a total ban on certain kinds of speech. That’s not what is being proposed. What is being proposed is to simply stop amplifying harmful information.
From a technical perspective, there are a number of ways this could work, and a number of interesting solutions have been proposed by others. One approach would be to put a limit on virality. Posts go “viral” for two reasons: first, because people share them widely; and second, because the platforms (sensing that a post is trending upward) start “recommending” it to their audience. These two factors together create a kind of feedback loop.
The solution is to break that loop by adding friction and slowing things down. Make it more difficult to share posts that have already been widely shared. Put in time delays so that people have time to think about what they are doing before acting on impulse.
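To make the friction idea concrete, here is a minimal sketch in Python of what such a rule might look like. The threshold and delay values are invented purely for illustration; no real platform is known to use these numbers or this exact policy.

```python
def reshare_friction(share_count: int,
                     viral_threshold: int = 1000,
                     max_delay_minutes: int = 60) -> dict:
    """Decide how much friction to apply before a reshare goes through.

    Below the (hypothetical) viral threshold, sharing is instant.
    Above it, a delay grows with how widely the post has already
    spread, and the user is asked to confirm -- breaking the
    share-then-recommend feedback loop described above.
    """
    if share_count < viral_threshold:
        return {"allowed_immediately": True,
                "delay_minutes": 0,
                "confirm_prompt": False}

    # The further past the threshold, the longer the delay,
    # capped at max_delay_minutes.
    overshoot = share_count / viral_threshold
    delay = min(max_delay_minutes, int(5 * overshoot))
    return {"allowed_immediately": False,
            "delay_minutes": delay,
            "confirm_prompt": True}
```

The key design point is that the rule looks only at how widely a post has spread, not at its content or its politics, which is what makes this kind of remedy content-neutral.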
Of course, this might result in a reduction in profit, but then again maybe not. There are lots of people (myself included) who refuse to use social media platforms because there is so much bad information on them. I would welcome the opportunity to engage in civil, deliberative debate with a wider audience. But that’s not going to happen on social media platforms as they are constituted today.
I also want to note that these remedies can be, and should be, politically neutral. No political party should fear suffering a disadvantage under a regulatory regime like this.
A neutral approach to reducing the harm of social media platforms is possible, and should be done. And this is not “censorship”.