Ukraine war: What lessons for cyber security and safety?
Members of the Siberian Battalion of the Ukraine's Armed Forces International Legion attend military exercises, amid Russia's attack on Ukraine, at an undisclosed location in Kyiv region, Ukraine October 24, 2023. REUTERS/Valentyn Ogirenko
What’s the context?
Digital Security Lab Ukraine chief explains how people can protect their online security in conflict
More than 600 days after Russia invaded Ukraine, the cyberwar rages on. Russia attacked Ukrainian energy infrastructure with a winter air campaign last year that caused sweeping power cuts for millions of people.
Russian hackers compromised several important Ukrainian organizations, including nuclear power companies, media firms and government entities, according to Microsoft.
Groups like Digital Security Lab Ukraine (DSLU) - which won a Front Line Defenders Award earlier this year - are working to boost activists' and journalists' online security through digital audits, technical support and equipment.
Context asked DSLU's executive director Vita Volodovska about the biggest digital challenges facing Ukrainians, how people can protect their online security, and how this might apply to other conflicts.
What are the biggest digital issues facing Ukrainians during the conflict?
The key challenge for all Ukrainians is the policies of internet platforms like Meta.
Many Ukrainian publications are being taken down for violating the platforms' terms and conditions by talking about the war - for showing graphic content, or for hate speech.
It's very difficult to predict what can be the basis for publications to be taken down - but we do have a helpdesk and helplines, so we can communicate these cases to Meta, and (most of the time) they actually restore the publications.
Another issue is misinformation, mostly shared in Telegram channels, and it's very difficult to do something about that.
Journalists face similar issues, as Meta is trying to make the news less politicised and polarised, and so media content is being downranked in the newsfeed.
Another problem is cyberattacks - when the (Russian) invasion started there were many (cyberattacks) targeting human rights and media organisations.
Malware attacks also try to block, encrypt or destroy information on these organisations' devices.
We've seen these same attacks since 2014 - the only difference since the start of the war is the scale of the attacks, but this is a general trend that's been happening for a long time.
What steps can people take to protect their digital security?
With high-risk human rights organisations and media, we try to audit their websites and secure them against the risks they foresee in the future.
We also share information about phishing attacks that are happening, and conduct training like step-by-step instructions on securing devices and setting up good passwords.
With the full-scale invasion, many ordinary people have taken an increased interest in digital security. We see lots of online courses on digital security, but it's something that needs to be worked on every day.
Do some of the tactics Ukraine has called for - such as GPS blocking, Meta and Google pulling services in Russia, digital IDs - pose concerns for digital rights in the future, when the war is over?
We don't see concerns over those initiatives - some of them were overreaching and unrealistic, but that was at the beginning of the invasion, and I can understand why they were proposed.
Meta and Google's sanctions, we think, help to prevent the spread of disinformation in Ukraine.
Russia funds disinformation campaigns with billions of dollars, and this is something (we) cannot address, we do not have the resources to debunk all this information.
It is challenging to know when bans should be lifted, but the decision should not be taken by the government alone, but with other stakeholders like security services, journalists, human rights defenders, and representatives of civil society.
You can use a VPN in Ukraine and access those websites anyway, so it's not a serious issue, but banning Russian websites could make it more difficult to collect information about illegal actions and violations of human rights.
We do have concerns over AI. The government uses Clearview AI to identify Russian soldiers in Ukraine, and there is not enough transparency on how this technology is used, whether it's used on Ukrainians, and how they plan to use it after the war is over.
To what extent can these digital tools and solutions be applied in other conflicts?
What we see is that the world was not ready to address these threats of disinformation and cyberattacks.
Even the cybercrime treaties going through the U.N. now are more about rules for democratic governments on how they should not violate human rights, rather than effective measures on how you can address the actions of bad actors.
The discussions that are happening around lifting sanctions on Russia show that we are afraid of violating human rights - which is right - but we should recognise that those rights can also be violated if there is no proper response.
We need to be brave enough to use strict rules, but which are fair and can still be effective in protecting freedom of speech and democratic values.
This interview was shortened and edited for clarity.
(Reporting by Adam Smith; Editing by Zoe Tabary.)
Context is powered by the Thomson Reuters Foundation Newsroom.
Tags
- Disinformation and misinformation
- War and conflict
- Social media
- Data rights