AI Action Summit in Paris: A stress test for global AI governance

A view shows a surveillance camera as French police start to test artificial intelligence-assisted video surveillance of crowds in the run-up to the Olympics in Paris, France, March 6, 2024. REUTERS/Abdul Saboor

As the AI summit kicks off, Trump’s comeback, a new geopolitical climate and Europe’s strategic shift are shaking up governance

Lisa Soder is an AI expert at interface, a European think tank focused on policy and technology.

Expectations for next week’s AI Action Summit in Paris are sky-high - partly due to last month’s DeepSeek shock and, of course, to Donald Trump.

This is the first AI summit bringing together world leaders, including U.S. Vice President JD Vance, heads of government, and the global tech elite since the political shift in Washington.

The first two international AI summits, held in Bletchley Park and Seoul, yielded some notable successes: despite strained relations, the U.S. and China signed a joint declaration on AI safety, and tech giants like OpenAI, Mistral, and Google DeepMind committed to greater transparency and stricter security standards.

However, the geopolitical landscape has fundamentally changed since then. Rivalries between global powers - especially the U.S. and China - have intensified, and Europe’s willingness to confront Big Tech seems to be waning.

What, then, can and must the Paris summit achieve under these new circumstances?

Address risks

Initially, these AI summits aimed to address the risks posed by rapidly advancing technology.

Since the last summit in May, the urgency has only grown: according to the International AI Safety Report, the latest AI models are approaching the skill level of professional cybersecurity teams, in some cases identifying vulnerabilities faster than human experts.

In biotechnology, they are setting new benchmarks, at times even outperforming PhD-level scientists in planning complex lab experiments.

However, as AI capabilities increase, so do the risks: cyberattacks, deepfakes, and dual-use scenarios in biotechnology pose serious threats to democracies and public security alike.


AI applications are also massive energy consumers: by 2026, they could require as much electricity annually as an entire country the size of Austria.

Despite past promises to move beyond non-binding declarations and establish a concrete regulatory framework, little progress has been made.

Meanwhile, the political climate is shifting rapidly.

Trump has reinstated his “America First” doctrine, rolling back AI safety and environmental regulations in one of his first executive actions while threatening protectionist measures.

At the same time, China’s DeepSeek has made spectacular breakthroughs, unsettling companies like OpenAI and Microsoft and further fuelling the AI arms race.

The European Union, once hailed - or feared - as the world’s “super regulator,” has recently toned down its ambitions following the Draghi Report on European competitiveness.

The bloc now seems wary of deterring investors and tech firms or provoking Trump’s threatened tariffs.

This new geopolitical climate is reflected in the summit’s agenda: while regulatory discussions remain on the table, the focus has shifted toward innovation, culture, and public-sector AI applications.

These are undoubtedly important issues. However, as the agenda broadens, it becomes harder to enforce binding commitments on the handful of tech giants driving the highest risks.

Window of opportunity

If the summit turns into a mere showcase of successful AI projects, its core mission - setting clear boundaries for powerful tech firms and mitigating AI’s societal and environmental dangers - will be sidelined.

For the Paris summit to be a success, three key points are crucial.

First, a critical assessment is needed to determine whether the self-regulatory commitments made in Bletchley Park and Seoul have led to any real progress.

Without a mechanism to reward compliance, the most irresponsible actors will ultimately benefit, setting the standards for the entire industry.

The AI Action Summit presents an ideal opportunity to scrutinize the actual implementation of safety measures.


Second, France’s diplomatic finesse will be crucial in bringing the major AI powers to the negotiating table.

The U.S. and China are locked in a high-stakes race for AI dominance. In Washington, officials have compared AI development to a “Manhattan Project,” referencing the World War II atomic bomb programme, while China’s DeepSeek breakthrough has been described as a “Sputnik moment.”

As long as both sides frame AI as a power struggle, safety standards will inevitably take a backseat.

This is where Europe, led by France, could play a pivotal role: fostering dialogue and trust-building initiatives could significantly reduce the risk of escalation.

Third, Europe must resist the temptation to abandon its regulatory ambitions in pursuit of global AI competitiveness. Weakening regulation will do little to support European startups and SMEs (small and medium-sized enterprises) - in fact, it may do the opposite.

The ultimate beneficiaries of regulatory rollbacks would be the already-dominant U.S. tech giants. If transparency and safety requirements remain vague, European founders could find themselves drowning in legal liability, while American corporations leverage their vast resources to navigate the landscape with ease.

What is needed are clear transparency obligations. If Europe genuinely aims to become a leading hub for AI innovation and technology, it must establish firm regulatory guardrails.

The Paris summit is a test of whether the international community is willing - and able - to cooperate on AI safety, or whether nationalistic agendas, the pursuit of AI dominance, and geopolitical rivalries will erode the hard-won progress of recent years.

Europe can either mediate between power blocs and leverage its regulatory expertise effectively, or it can pave the way for unchecked AI development.

The risks are clear, the key players are assembled - and the window for meaningful global cooperation may be closing faster than we would like to admit.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.

