The news that US billionaire Elon Musk is buying Twitter has sparked a wide range of concerns. One worry has been the emphasis Musk has placed on boosting free speech on the platform. Below are reactions from three campaigning groups, each giving its own perspective on the issues.
The EFF (based in San Francisco in the US) says it ‘champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development’. Amnesty International is a leading international human rights organisation. HOPE not hate is a British anti-extremism charity whose first priority ‘remains the organised far right, the communities who are susceptible to them and the issues and policies which give rise to them’.
Electronic Frontier Foundation
Jillian C. York, Gennie Gebhart, Jason Kelley and David Greene
Elon Musk’s purchase of Twitter highlights the risks to human rights and personal safety when any single person has complete control over policies affecting almost 400 million users. And in this case, that person has repeatedly demonstrated that they do not understand the realities of platform policy at scale.
The core reality is this: Twitter and other social networks play an increasingly important role in social and political discourse, and have an increasingly important corollary responsibility to ensure that their decision-making is both transparent and accountable. If he wants to help Twitter meet that responsibility, Musk should keep the following in mind:
Free Speech Is Not A Slogan
Musk has been particularly critical of Twitter’s content moderation policies. He’s correct that there are problems with content moderation at scale. These problems aren’t just specific to Twitter, though Twitter has some particular challenges. It has long struggled to deal with bots and troubling tweets by major figures that can easily go viral in just a few minutes, allowing mis- or disinformation to rapidly spread. At the same time, like other platforms, Twitter’s community standards restrict legally protected speech in a way that disproportionately affects frequently silenced speakers. And also like other platforms, Twitter routinely removes content that does not violate its standards, including sexual expression, counterspeech, and certain political speech.
Better content moderation is sorely needed: less automation, more expert input into policies, and more transparency and accountability overall. Unfortunately, current popular discourse surrounding content moderation is frustratingly binary, with commentators either calling for more moderation (or regulation) or, as in Musk’s case, far less.
To that end, EFF collaborated with organizations from around the world to create the Santa Clara Principles, which lay out a framework for how companies should operate with respect to transparency and accountability in content moderation decisions. Twitter publicly supported the first version of the Santa Clara Principles in its 2019 transparency report. While Twitter has yet to implement the Principles in full, that declaration was an encouraging sign of its intent to move toward them: operating on a transparent set of standards, publicly sharing details around both policy-related removals and government demands, notifying users of moderation decisions that affect them, and giving them the opportunity to appeal. We call on Twitter’s management to renew the company’s commitment to the Santa Clara Principles.
Anonymous and pseudonymous accounts are critical for users
Pseudonymity – the maintenance of an account on Twitter or any other platform by an identity other than the user’s legal name – is an important element of free expression. Based on some of his recent statements, we are concerned that Musk does not fully appreciate the human rights value of pseudonymous speech.
Pseudonymity and anonymity are essential to protecting users who may have opinions, identities, or interests that do not align with those in power. For example, policies that require real names on Facebook have been used to push out Native Americans; people using traditional Irish, Indonesian, and Scottish names; Catholic clergy; transgender people; drag queens; and sex workers. Political dissidents may be in grave danger if those in power are able to discover their true identities.
Furthermore, there’s little evidence that requiring people to post using their “real” names creates a more civil environment – and plenty of evidence that doing so can have disastrous consequences for some of the platform’s most vulnerable users.
Musk has recently been critical of anonymous users on the platform, and suggested that Twitter should “authenticate all real humans.” Separately, he’s talked about changing the verification process by which accounts get blue checkmarks next to their names to indicate they are ‘verified’. Botnets and trolls have long presented a problem for Twitter, but requiring users to submit identification to prove that they’re ‘real’ goes against the company’s ethos.
There are no easy ways to require verification without wreaking havoc for some users, and for free speech. Any free speech advocate (as Musk appears to view himself) willing to require users to submit ID to access a platform is likely unaware of the crucial importance of pseudonymity and anonymity. Governments in particular may be able to force Twitter and other services to disclose the true identities of users, and in many jurisdictions they can do so without sufficient respect for human rights.
Better user privacy, safety, and control are essential
When you send a direct message on Twitter, there are three parties who can read that message: you, the user you sent it to, and Twitter itself. Twitter direct messages (or DMs) contain some of the most sensitive user data on the platform. Because they are not end-to-end encrypted, Twitter itself has access to them. That means Twitter can hand them over in response to law enforcement requests, they can be leaked, and internal access can be abused by malicious hackers and Twitter employees themselves (as has happened in the past). Fears that a new owner of the platform would be able to read those messages are not unfounded.
Twitter could make direct messages safer for users by protecting them with end-to-end encryption and should do so. It doesn’t matter who sits on the board or owns the most shares – no one should be able to read your DMs except you and the intended recipient. Encrypting direct messages would go a long way toward improving safety and security for users, and has the benefit of minimizing the reasonable fear that whoever happens to work at, sit on the board of, or own shares in Twitter can spy on user messages.
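As a rough sketch of what end-to-end encryption changes, the example below uses the open-source PyNaCl library (purely illustrative; it is not Twitter’s code or a proposal for its design). Each party holds a private key that never leaves their device, and the service relays only ciphertext it cannot read.

```python
from nacl.public import PrivateKey, Box

# Each user generates a key pair; the private half never leaves their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a DM for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(
    b"Meet at the usual place at 6."
)

# The platform only ever stores and forwards `ciphertext`; without one of
# the two private keys it cannot recover the message, and neither can
# anyone who compromises or compels the platform.

# Only Bob (holding his private key) can decrypt what Alice sent.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"Meet at the usual place at 6."
```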
Another important way to improve safety on the platform is to give third-party developers, and users, more access to control their experience. Recently, the platform has experimented with this, making it easier to find tools like BlockParty that allow users to work together to decide what they see on the site. Making these tools even easier to find, and giving developers more power to interact with the platform to create more tools that let users filter, block, and choose what they see (and what they don’t see), would greatly improve safety for all users. If the platform were to pivot to a different method of content moderation, it would become even more important for users to have access to better tools to modify their own feeds and block or filter content more accurately.
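To make the idea of user-controlled filtering concrete, here is a hypothetical sketch; the function and field names are invented for illustration and are not part of Twitter’s or BlockParty’s real APIs. The point is simply that the user, not the platform, holds the block and mute lists.

```python
from dataclasses import dataclass
from typing import Iterable, List, Set

@dataclass
class Tweet:
    author: str  # account handle
    text: str    # tweet body

def filter_timeline(tweets: Iterable[Tweet],
                    blocked_accounts: Set[str],
                    muted_keywords: Set[str]) -> List[Tweet]:
    """Return only the tweets the user has chosen to see.

    Hypothetical client-side filter: hides tweets from blocked accounts
    and tweets containing any muted keyword.
    """
    visible = []
    for tweet in tweets:
        if tweet.author in blocked_accounts:
            continue
        lowered = tweet.text.lower()
        if any(word.lower() in lowered for word in muted_keywords):
            continue
        visible.append(tweet)
    return visible

# Example: the user's own lists decide what is hidden.
timeline = [Tweet("@friend", "See you at the meetup!"),
            Tweet("@troll", "abusive message here")]
print(filter_timeline(timeline,
                      blocked_accounts={"@troll"},
                      muted_keywords={"abusive"}))
```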
There are more ambitious ways to improve the experience on Twitter and beyond: Twitter’s own Bluesky project put forward a plan for an interoperable, federated, standardized platform. Supporting interoperability would be a terrific move for whoever controls Twitter. It would help move power from corporate boardrooms to the users they serve. If users have more control, it matters less who’s running the ship, and that’s good for everyone.
Amnesty International UK
Responding to news that billionaire entrepreneur Elon Musk has bought Twitter for $44bn (£34.5bn), Michael Kleinman, director of technology and human rights at Amnesty International USA, said:
‘Amnesty International has tracked the disturbing persistence of hate speech on Twitter – especially violent and abusive speech against women and non-binary persons. Our Toxic Twitter report from 2018 found that the platform failed to uphold its responsibility to protect women’s rights online, leading many women to silence or to censor themselves on the platform.
‘We have since released a number of follow-up reports tracking Twitter’s continued lack of progress on this issue. Our most recent report, from December 2021, highlighted several concrete steps that Twitter should take to address hateful and abusive speech against women, of which they have fully implemented only one.
‘Regardless of ownership, Twitter has a responsibility to protect human rights, including the rights to live free from discrimination and violence and to freedom of expression and opinion – a responsibility that it already too often fails to meet. We are concerned about any steps that Twitter might take to erode enforcement of the policies and mechanisms designed to protect users.
‘The last thing we need is a Twitter that willfully turns a blind eye to violent and abusive speech against users, particularly those most disproportionately impacted, including women, non-binary persons, and others.’
HOPE not hate
Did you see the news that Elon Musk has bought Twitter? The Tesla CEO, and world’s richest man, has been on a mission to buy the company for some time. Musk is a self-described ‘free speech absolutist’ and his takeover is likely to have significant implications for how Twitter is run.
We already know what absolutist ‘free speech’ social media platforms look like. HOPE not hate has spoken out extensively about the hate and conspiracy theories, such as Holocaust denial, spreading like wildfire on alt-tech platforms like Gab, Telegram and BitChute. In the name of ‘free speech’ they have far fewer safeguards than traditional social media platforms. In reality, fewer voices end up being heard, as hate such as racist abuse, rape threats against women and attacks on LGBTQ+ communities spreads more quickly.
When it comes to tackling hate, Twitter is far from perfect. But its current guidelines mean there is a process for removing hate from the platform. As a community, we’ve had real success in getting accounts spreading hate removed. Under Elon Musk’s leadership, this approach could be at risk.
This is why far-right hatemongers like Tommy Robinson are already celebrating Elon Musk taking over Twitter: their understanding of 'free speech' is the freedom to hate, abuse, attack and spread racism and lies, and they hope to be able to get back on the platform to do just that.
With the world’s attention on Elon Musk’s takeover, now is the time to take action and call on him to keep hate off Twitter: sign our open letter to Elon Musk now.
We’ve seen time and time again how ill-informed opinions or outright lies like Holocaust denial and race science flood the debate on social media platforms in the name of ‘free speech’, and how ‘he who shouts the loudest’ (and it usually is a 'he') ends up drowning out others, especially those from minoritised communities.
De-platforming isn’t always the right solution - but in terms of stopping extremists like Tommy Robinson and Britain First spreading their hate on social media, the anti-hate movement has made significant progress over the last few years. With Elon Musk buying Twitter, there’s a real risk that these groups could end up taking over once more.
Together we can keep up the pressure on Elon Musk to do the right thing by keeping hate groups off Twitter - add your name to our open letter if you agree.