The real problem with Twitter


Small figurines and Twitter logo are seen in this illustration taken November 4, 2022. REUTERS/Dado Ruvic

The problems at Twitter show a wrong turn for the wider ideal of social media, with greater control concentrated in fewer hands

Andrew Stroehlein is Europe media and editorial director at Human Rights Watch.

One of the great advantages of social media is the way it has democratised communication.

Not that everyone has an equal voice, but at least these tools are available for everyone to try to get their message across. Unfortunately, we’ve been watching that ideal crumble at Twitter.

In just the few weeks since Elon Musk took over, a cloud of uncertainty has shrouded the platform. The loss of key staff responsible for safety, accessibility, and human rights is cause for deep concern. And with the number of trouble-shooting engineers reduced dramatically, there are fears for the technical stability of the platform itself.

However, the problems of the past few weeks reveal something more fundamental: a wrong turn for the wider ideal of social media itself.


The recent ownership change has really only highlighted a worrying evolution at Twitter, and at Facebook before it, towards more control in fewer hands. Even before Musk took over, the platform was increasingly shaped by design decisions that undermined the more democratic nature of its original ideal.

Above all, perhaps, is the fact that algorithms decide what gets seen, and, as with so many websites, Twitter’s algorithmic imperative is to maximise “user engagement”. This tends to amplify sensational content, most worryingly posts that skirt or cross the line into hate speech – and the dangers of amplifying hate speech on social media are all too real.

In deciding what gets shared, the algorithms – or, really, the people who write them with the goal of maximising user engagement – also inevitably shape what is amplified and seen, and what is downgraded and hidden.

Thus, social media has gone from a kind of democratic system, or at least a fair one where the time of posting was the only factor determining what was shared and when, to an oligarchic system where a relatively small group of insiders determines the rules of who and what gets heard.
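To make that contrast concrete, here is a minimal, purely illustrative sketch in Python. It is not Twitter’s actual ranking code, and every name and number in it is hypothetical; it simply shows how an engagement-maximising feed reorders the same posts that a chronological feed would show newest-first.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    posted_at: int        # seconds since some reference time (hypothetical)
    engagement: float     # likes + replies + reshares (hypothetical score)

posts = [
    Post("activist", "Quiet update on local organising", posted_at=300, engagement=12),
    Post("provocateur", "Outrage-bait hot take", posted_at=100, engagement=950),
    Post("reporter", "Measured explainer thread", posted_at=200, engagement=40),
]

# Chronological feed: the time of posting is the only determining factor.
chronological = sorted(posts, key=lambda p: p.posted_at, reverse=True)

# Engagement-ranked feed: whatever already provokes the most reactions rises
# to the top, which tends to amplify the most sensational content.
engagement_ranked = sorted(posts, key=lambda p: p.engagement, reverse=True)

print([p.author for p in chronological])      # ['activist', 'reporter', 'provocateur']
print([p.author for p in engagement_ranked])  # ['provocateur', 'reporter', 'activist']

The point of the sketch is only that the ranking rule, and whoever writes it, decides whose post is seen first; the chronological ordering needs no such insider judgement.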

This pre-dates Musk’s arrival, but it does underline the core problem. It all matters, because Twitter is now such a key focal point in the global information environment. Whether you prefer to call it a “public forum” or “town hall” or “online agora,” Twitter has served as a central point of exchange, both for grass-roots organising among activists on the ground, and for grass-tops conversations among elites globally.

Social media platforms have become central to our public conversations – discussions that are an essential part of democracies. We can’t afford not to have more democratic control over them.

Since companies are unlikely to abandon their abusive business models voluntarily, strong regulation is necessary, grounded in international human rights standards.

Companies have a responsibility to respect human rights, but even before Musk took over Twitter, we lacked the most basic information needed to hold it, other tech companies, and their owners accountable. Musk’s firing of the very people charged with looking after human rights issues makes that even harder.

The platforms should be more transparent about the algorithms that shape what people see, and about how they direct users toward harmful misinformation. The EU's Digital Services Act is a worthy step in this direction. Among other things, it obliges platforms like Twitter to carry out systemic risk assessments of possible human rights harms stemming from their algorithms, and to mitigate those risks.

For content moderation, too, there is a model out there in the form of the Santa Clara Principles, which set out a vision of meaningful transparency and accountability. That’s important, because the potential harms to users, particularly those from marginalised communities, are enormous, and we have seen many times in the past how that can go wrong.

At the most basic level, this is about concentration of power. If we accept that Twitter and other tools are such powerful platforms – if they have become central to the way we share information today – then surely control of the speech of millions should not be up to the whims of a few powerful companies or individuals. That’s too much power in too few hands.

The issue with Twitter is not Musk. The issue is respect for human rights and democracy.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.




