Besides AI, regulation key to fight mis/disinformation


A man uses a smartphone as he walks past a poster warning against spreading 'fake news' on the coronavirus in Hanoi, Vietnam April 14, 2020. Picture taken April 14, 2020. REUTERS/Kham

While AI has a crucial role to play in controlling what is seen online, government regulations play an important part in determining how such methods are to be used.

By Anya Schiffrin, director of the Technology, Media and Communications specialization at Columbia University’s School of International and Public Affairs.

When worries about online mis/disinformation became widespread after the 2016 U.S. election, there was hope that the tech giants would use artificial intelligence (AI) to fix the mess they had created. The hope was that platforms could use AI and natural language processing (NLP) to automatically block or downrank false, illegal or inflammatory content online without governments having to regulate.
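To see what that kind of automated screening looks like in miniature, consider the sketch below. It is a toy illustration only: the training posts, labels and the rank_score function are all hypothetical, and real platforms rely on far larger models plus human review, not a tiny classifier like this one.

```python
# Minimal sketch of a supply-side NLP filter: score a post for likely
# policy violations and scale down its feed ranking accordingly.
# The labeled corpus here is hypothetical; production systems train
# large models on millions of moderator-labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = violating, 0 = benign.
posts = [
    "miracle cure doctors don't want you to know",
    "shocking secret plot revealed share before deleted",
    "city council approves new bike lane budget",
    "local library extends weekend opening hours",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def rank_score(text: str, base_score: float) -> float:
    """Downrank: scale a post's feed score by its probability of being benign."""
    p_violation = model.predict_proba([text])[0][1]
    return base_score * (1.0 - p_violation)

print(rank_score("shocking miracle cure revealed", 100.0))
```

Note that this is downranking rather than removal: the post stays up, but its estimated probability of violating policy shrinks its visibility, which is one reason critics worry about opaque, unaccountable moderation.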

Free speech organizations and human rights activists, among others, worried about corporate censorship and unaccountable entities making decisions about what is disseminated online, but for many, it seemed like a convenient way of limiting some of the damage caused by false information online.

Now, it’s clear that while AI has a crucial role to play in controlling what is seen online, government regulations can play an important part in determining how such methods are to be used.


Several startups use AI, NLP and pattern recognition, with machine- and deep-learning algorithms trained to simulate human learning, to identify actor networks and analyze traffic patterns, spotting accounts that behave as if they use a high level of automation and might be bots. We found that there was much less of a market for these companies' services than they had originally hoped, and that Google and Facebook largely don't hire small firms to do this kind of screening.
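The traffic-pattern side of that work can be illustrated with a simple heuristic: automated accounts often post at high volume on a suspiciously regular schedule. The sketch below is hypothetical; the flag_automation name and the thresholds are assumptions, and commercial detection systems combine many more signals than posting rhythm alone.

```python
# Hypothetical heuristic for spotting highly automated accounts:
# bots often post at high volume with unusually regular intervals.
# Thresholds here are illustrative, not taken from any real product.
import statistics

def flag_automation(post_timestamps: list[float],
                    min_posts: int = 50,
                    max_interval_cv: float = 0.2) -> bool:
    """Return True if an account's posting rhythm looks machine-like.

    post_timestamps: Unix times of the account's posts, in order.
    A low coefficient of variation (stdev / mean) of the gaps between
    posts means the account posts on a near-fixed schedule.
    """
    if len(post_timestamps) < min_posts:
        return False  # too little data to judge
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return True  # simultaneous posts: almost certainly automated
    cv = statistics.stdev(gaps) / mean_gap
    return cv < max_interval_cv

# An account posting exactly every 60 seconds, 100 times in a row:
print(flag_automation([i * 60.0 for i in range(100)]))  # True
```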

It’s also become clear that the problem of online mis/disinformation is not one that can be solved by the market or by technology alone. In 2017, I proposed that we look at the fixes for online mis/disinformation as supply-side and demand-side solutions.

Demand-side solutions focus on the responsibility of audiences, while supply-side solutions target the supply of mis/disinformation online. Using AI to screen and remove or downrank online mis/disinformation is a supply-side solution. But our recent research confirms what many others have found: algorithms and AI on their own are simply not going to solve the problem.

For one thing, the financial incentives to produce and/or circulate false information online are too great. For another, there are many reasons why people believe or act on information that is false or inflammatory.

“Disinformation and misinformation have been approached as a technical issue. That’s the agenda of the big tech players. But more and more elements are not technical. They are political, economic and regulatory,” said Alejandro Romero, chief operations officer and co-founder of Constella Intelligence, which monitors online disinformation.

When it comes to regulation, Europe is well ahead of the United States. The European Union’s Digital Services Act (DSA), approved in April 2022, and the UK’s draft Online Safety Bill require platforms to conduct risk assessments and explain to regulators how they plan to mitigate the impact of harmful content. The EU’s DSA focuses on risks to society, while the UK bill focuses on risks to individuals. Germany’s NetzDG, introduced in 2017 and revised in 2021, imposes fines on tech companies that show a pattern of failing to remove illegal content.

French regulators say the EU’s Digital Services Act is similar to banking regulation, because rather than supervising every transaction, it requires companies to build systems to mitigate risk.

Hopefully, the new regulations will help companies find the right balance between curbing harmful content and protecting freedom of expression, and the laws may spur innovation too. But firms developing AI and deep learning must keep in mind that authoritarian regimes will likely use their technologies to suppress the flow of legitimate information rather than to improve content safety. So the search for solutions continues.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.



