Ukraine conflict: How can open-source intelligence help prove war crimes?
A prosecutor's office member uses a mobile phone inside a damaged school after a missile strike, amid Russia's attack on Ukraine, at a residential area in Kharkiv, Ukraine Jun 2, 2022. REUTERS/Ivan Alvarado
What’s the context?
A non-profit is using open-source intelligence (OSINT) to document starvation war crimes in Ukraine
- Russia using civilian grain to fund war effort
- Satellite imagery and social media key sources of information
- Internet shutdowns, algorithmic bias slow information gathering
From social media videos and photos to commercial data, activists, researchers and journalists are increasingly turning to open-source intelligence to document conflict and gather evidence of possible war crimes.
The OSINT sector has boomed in recent years with the development of tools that aid data analysis, and is now crowded with online sleuths - who often work together as they try to verify information.
Gathering such information from publicly available sources, Netherlands-based non-profit Global Rights Compliance (GRC) found that Russia used grain to fund its war effort - purposefully denying food to civilian populations.
Russia allegedly seized control of grain elevators, road and rail infrastructure and ports in occupied territories, as well as grain from privately owned Ukrainian corporations, which "likely constitutes the war crime of pillage," GRC wrote in a report published in November.
Context asked legal advisor Rebecca Bakos Blumenthal about how the organisation gathers information and uses OSINT in its work; how it navigates challenges like internet shutdowns and online censorship; and the impact artificial intelligence (AI) could have on OSINT.
How are you using OSINT in your work?
On the ground in Ukraine, access is often impractical because an area is occupied, so exploiting that digital space is a crucial element.
Our report, 'Agriculture Weaponised', which was published in November, details systematic grain extraction, seizure and transport from Luhansk and Zaporizhzhia by Russian forces and affiliated non-military actors.
We looked at satellite imagery and user-generated content on social media platforms, layering those together to verify those instances.
It starts from a basic Google search, but the platforms we've used the most are obviously Telegram, but also Twitter (X) and VK (Russia's version of Facebook).
This can help us find victims or witnesses, as well as helping track alleged perpetrators and monitor statements and whereabouts.
This can also become direct evidence (of war crimes), if it's images or videos, which can be verified by geolocation or chronolocation.
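Chronolocation often rests on simple geometry: the sun's elevation, inferred from shadows visible in a photo or video, narrows down the time of capture once the location and date are known. A minimal sketch of that first step, in Python (the function name and measurements are illustrative, not from GRC's methodology):

```python
import math

def sun_elevation_deg(object_height_m: float, shadow_length_m: float) -> float:
    """Sun elevation angle implied by an object's shadow.

    The longer the shadow relative to the object, the lower the sun:
    tan(elevation) = object_height / shadow_length.
    """
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

# Example: a 2 m fence post casting a 2 m shadow implies a ~45 degree
# sun elevation; cross-referenced with the date and geolocated position,
# that constrains the plausible capture time to a narrow window.
elev = sun_elevation_deg(2.0, 2.0)
```

In practice investigators compare the inferred elevation (and the shadow's azimuth) against solar ephemeris tables for the geolocated coordinates to bracket when the footage was shot.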
Have you faced any challenges in getting information out of Ukraine?
One of the biggest challenges is ensuring you know how platforms like Telegram and X work, and resisting your own bias. We have amazing Ukrainian investigators and lawyers on our team, and OSINT experts that work with translators.
If you don't speak the language, you're going to miss out on loads of content.
Algorithmic bias is also a concern. If you enter a search term, you might get big headlines from newspapers, whereas if you're researching specific content, you might get more niche results because your algorithms are tailored to it.
That has positive and negative aspects. The positive is you don't have to spend a day looking for what you're seeking, because the algorithms identify you as interested in that.
But on the other hand, it might risk excluding certain information.
Tools change in a heartbeat, so it's important to be adaptable and not rely on a certain tool or platform.
How do you deal with internet shutdowns and media blackouts?
This is something we've faced in Ukraine, but in Ethiopia as well, when Tigray was completely besieged.
There were consistent communication shutdowns; you had to be good at using satellite imagery, as well as being ready to monitor when something did come out - and preserve it as soon as possible, before it got removed.
This aspect of (content) removal is also a problem with social media platforms. If posts violate their terms because of violent content, it's really important for those to be preserved for justice processes.
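Preservation workflows for justice processes typically fingerprint captured content at the moment of collection, so that a later copy can be shown to match the original capture even after the post disappears from the platform. A minimal sketch of that idea (the field names and URL are illustrative assumptions, not GRC's actual tooling):

```python
import datetime
import hashlib

def preserve_record(content: bytes, source_url: str) -> dict:
    """Record a SHA-256 digest and a UTC capture timestamp for a saved post.

    The digest lets anyone later verify that an archived copy is
    byte-for-byte identical to what was originally captured.
    """
    return {
        "source_url": source_url,
        "sha256": hashlib.sha256(content).hexdigest(),
        "captured_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Hypothetical capture of a post before it is taken down.
record = preserve_record(b"<html>example post</html>", "https://example.org/post/123")
```

Real evidence pipelines add more than a hash - chain-of-custody logs, trusted timestamps, and full-page archives - but the integrity digest is the common core.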
Have generative AI, deepfakes, and disinformation impacted your work?
With deepfakes (the manipulation of facial appearance through AI), to date, there is more paranoia than there is risk.
But as the technology gets better it will likely get more difficult to recognise those.
When we do OSINT investigations, the propaganda and disinformation or misinformation can be incredibly helpful for us.
We can infer the intent to do something, the rhetoric and propaganda aspect, and the information that circulates as a result of that.
For example, when an attack on a certain shelter or infrastructure occurs, we've sometimes seen statements come out accusing the other side.
In certain cases, there might be pre-emptive statements before an attack happens. You need to be very careful with what you can infer from that, but if you layer it together with other pieces of information it can give valuable insight.
This interview has been shortened and edited for clarity.
(Reporting by Adam Smith; Editing by Zoe Tabary.)
Context is powered by the Thomson Reuters Foundation Newsroom.
Our Standards: Thomson Reuters Trust Principles
Tags
- Internet shutdowns
- War and conflict
- Social media