Asylum seekers may end up rejected by algorithms that could replicate or amplify existing biases, all while lending a veneer of machine-made objectivity, Rodelli said.
Campaigners are also raising the alarm over algorithmic tools that forecast migration flows and may facilitate people being violently and illegally pushed back from borders.
A 2023 MSF report described how asylum seekers in Greece faced "physical violence, including being beaten, handcuffed, strip-searched, having their possessions confiscated, and forcibly sent back to sea".
Greece’s government has denied it engages in pushbacks.
Another criticism is that the new rules don't prevent EU-based companies from developing harmful AI systems for export, with no regard for how that technology might contribute to abuses elsewhere.
"If Europe wants to be a standard-setter globally on a regulation that is human rights-centred, I don't think it's sending the right message," Mher Hakobyan, Amnesty Tech's advisor on AI regulation, told me.
Though rights groups see the AI Act as a missed opportunity to protect all people against the harms of AI, they say their battle is not over.
Expect lawsuits, says Lanneau.
In April, the Greek government was fined over AI surveillance in refugee camps after it was found to have violated data privacy regulations. And in 2021, the Netherlands was gripped by a scandal in which the use of an algorithm led to parents, most of whom were from ethnic minority backgrounds, being wrongly accused of child benefit fraud.
It may be, campaigners say, that legal challenges and public backlash are what will ultimately close the human rights gaps.