Are social media firms doing enough to protect LGBTQ+ users?
The Facebook logo is displayed on a banner during the annual NYC Pride parade in New York City, New York, U.S., June 26, 2016. REUTERS/Brendan McDermid
What’s the context?
Social media giants Instagram, Twitter, Facebook, YouTube and TikTok were all given a 'failing grade' by rights group GLAAD
- Top platforms must step up action, says rights group
- Policies needed to protect trans users, finds index
- Social firms say they are working to tackle hate
By Lucy Middleton
LONDON - Major social media companies are not doing enough to protect LGBTQ+ users from abuse and harassment on their platforms, an analysis by U.S. rights group GLAAD has found.
The index analysed five leading platforms - Instagram, Twitter, Facebook, YouTube and TikTok - all of which were ranked as having "inadequate" measures to support and protect LGBTQ+ users. Each scored less than 50 out of 100.
"Social media platforms and companies are prioritizing profit over LGBTQ safety and lives," said Sarah Kate Ellis, the president of GLAAD, in an introduction to the 2022 Social Media Safety Index.
All of the social media firms examined have policies to prevent abuse, and platforms said they are constantly reviewing rules and developing their systems to remove harmful content.
So what did the report find and how can social media platforms best protect their users from harmful content?
What are the concerns?
The index analysed the platforms' performance in 12 areas, from commitments to protect LGBTQ+ users from discrimination to whether they offer pronoun options and control over how data on users' gender and sexuality is collected and used.
All five companies have policies that protect users from attacks based on their gender and sexual identity, the report said, but enforcement was often lacking.
It found that content moderation was inadequate on all five platforms, with automated systems failing to catch all instances of abuse and human moderators sometimes slow to respond.
Platforms said they were constantly working to develop and improve their anti-abuse systems.
The report also raised concerns over a lack of policies on referring to trans people by their pre-transition names, a practice known as "deadnaming" that is widely considered harassment because it undermines trans people's identity.
Of the five platforms, only TikTok and Twitter have community guidelines specifically banning "deadnaming" or intentionally referring to people as the wrong gender, the GLAAD report said.
What is the impact on LGBTQ+ people?
Young LGBTQ+ people spend more time online than their peers on average, according to a 2019 study, with researchers finding they look to platforms to explore their identities and connect with others.
Yet, research has shown that LGBTQ+ people are also disproportionately targeted by online abuse.
Four in 10 LGBTQ+ adults do not feel welcome and safe on social media, found GLAAD, which also warned that online hate was feeding into offline harassment and that misinformation was driving the growth of anti-LGBTQ+ laws in many U.S. states.
Author and trans rights campaigner Christine Burns said she has blocked more than 6,000 users on Twitter for anti-trans rhetoric and abuse.
"It can be depressing to see, but ... it is a large amount of noise generated by a small number of people," she said.
Is AI flagging of abuse working?
Most social media firms rely at least partly on automated and artificial intelligence-driven systems to identify content which is abusive or otherwise breaching their rules.
With hundreds of millions of users logging on to platforms each day, automated systems can trawl huge data sets to help flag harassment and abuse quickly.
"Human moderation requires tremendous time ... AI content moderation can provide assistive tools to expedite the human judgment process," said Abdulwhab Alkharashi, a computer science researcher from Glasgow University.
However, some users have found ways to get around AI systems.
"If you say something in a really happy tone and it's all about love and then you drop something (harmful) into your messaging, it might not get picked up," said Effi Paul, a social media expert who co-founded digital marketing agency Six20Two.
AI can flag likely hate speech by using machine learning to recognise patterns in language and emoji use.
"These systems can be tricked or bypassed by using complicated synonyms or symbols," said David Berry, a digital humanities professor at the University of Sussex.
"Humans are infinitely adaptable and creative in their language use whereas computers rely on mechanical or probabilistic systems which mean that they will always be behind the curve."
What are the solutions?
The index said that all platforms should improve the design of algorithms to prevent them from circulating harmful content, strengthen community guidelines, and train moderators to understand the needs of LGBTQ+ users.
It called for an end to targeted advertising based on gender identity, and for more data to be published on how guidelines are enforced.
GLAAD also gave specific recommendations for each of the five platforms it analysed, urging Instagram, Facebook and YouTube to adopt policies that protect users from targeted deadnaming and misgendering.
What do social media platforms say?
Twitter, YouTube and TikTok are all working with GLAAD.
Twitter said it has introduced features to help protect users, including allowing people to remove their usernames from other people's conversations, and nudging them to pause and consider responses before they post.
"While we have made recent strides in giving people greater control to manage their safety, we know there is still work to be done," a spokesperson said.
TikTok said it is committed to ensuring its policies are fair and equitable, and that it is continually taking steps to strengthen protections for marginalised people and communities.
"TikTok is committed to supporting and uplifting LGBTQ+ voices, and we work hard to create an inclusive environment for LGBTQ+ people to thrive," said a spokesperson.
A YouTube spokesperson said it had made significant progress in its ability to "quickly remove hateful and harassing content and prominently surface content in search results and recommendations from authoritative sources".
Facebook and Instagram, both of which are owned by Meta Platforms, did not respond to requests for comment.
Facebook said in September that it had spent more than $13 billion in safety and security measures since 2016.
Context is powered by the Thomson Reuters Foundation Newsroom.