Q&A: Act now to shield children from AI, says French envoy
Anne Bouverot speaks on a panel moderated by Martin Tisne on 'AI for Good: Charting the Path to Public Interest' at the Thomson Reuters Foundation annual Trust Conference in London, Britain, October 22, 2025. Thomson Reuters Foundation.
What’s the context?
Tech expert says AI could deepen the ills of social media, citing its potential effect on 'self-image, self-esteem, suicide'.
LONDON - AI is only in its infancy but could rapidly exacerbate the noxious side-effects of social media if left unchecked, with children particularly vulnerable, said the French president's special envoy for AI.
From eroding self-esteem to prompting suicidal thoughts, social media has been widely blamed for worsening many of the stresses of modern life, with young users among its top victims.
Throw AI in the mix and the dangers loom still larger, said Anne Bouverot, who led preparations for the Paris AI summit.
The EU's AI Act, the world's first comprehensive set of rules governing the technology, came into force in 2024 and seeks to balance protection and innovation.
Tech that starts out as "innocuous" could end up impacting children and democracies, Bouverot told Context on the sidelines of the Trust Conference, the Thomson Reuters Foundation's flagship gathering of leaders and experts.
We gathered her thoughts on how to get the best from AI without falling prey to its potential downsides:
How do you legislate AI in a way that does not stifle growth?
That's a very important balancing act. We need to care for the rights of citizens, the rights of the people, and to have protection.
That's what the EU is doing with the various regulations. At the same time, we need to develop services and solutions that fit our own needs ... which are to stimulate that innovation in a way that is aligned with the values of European citizens.
We announced at the (February) Paris Summit, for example, 200 billion euros ($233 billion) of investment in research, in creating companies, startups, or getting larger ones to use AI.
Europe needs to balance protection with innovation.
What must be done to protect labour rights in the age of AI?
There's a need to look more closely at how jobs are changing, how we need to change the education system to get people ready for the new jobs and to retrain, re-skill the ones whose jobs are being impacted.
Companies and public services need to support that shift. We need to have social dialogue and discussions with unions. We need to discuss the process with workers and employees. We need to do training. We need to have social protection systems.
We need to steer AI ... to augment people's jobs and not substitute or replace people's jobs.
Anne Bouverot speaks alongside Nayana Prakash (CL), Martin Jullin (CR), Michele Jawando (R) on a panel moderated by Martin Tisne (L) on 'AI for Good: Charting the Path to Public Interest' at the Thomson Reuters Foundation annual Trust Conference in London, Britain, October 22, 2025. Thomson Reuters Foundation.
France wants a ban on social media for children under 15. Why?
At first, we saw social media develop and we all collectively thought, 'Ah, this is just a nice thing to have. Maybe we like it, maybe we don't like it, but it's quite innocuous'.
And now people are realising that especially for children, there are risks, there are harms: self-image, self-esteem, suicide. There's lots of potential harms. There are some benefits, but we need guardrails around these harms.
Social media became these big platforms that have so much impact on children and on democracies.
In France, we're thinking about how to regulate, including age-access criteria for social media, and many others are thinking about this too.
AI is in its infancy, but we already see huge use amongst children and teenagers. So now is the time when we need to act to prevent similar or new harms and risks.
Companies that develop these AI solutions, and social media as well, are global companies. So we can't only do that at the national level. We also need to go on a broader level.
France will be hosting the next G7 in June next year. That's one of the topics that we're planning to put on the agenda.
What do you see as the upside and risks of AI in coming years?
On the one hand, we have the challenge of adoption, and on the other hand, we have the challenge of sovereignty.
AI has lots of promise in terms of how it can improve access to public services, improve the efficiency of what governments, states and regions provide to their citizens, improve the competitiveness of companies, stimulate new innovation.
For that potential to be realised, you need fast adoption, you need to do it in the right way. You really need to train people, try things; otherwise, you won't get the jobs, you won't get the productivity, you won't get the benefits.
But we need to make sure there's more choice and the ability to have AI solutions developed in a multitude of places.
So the challenge is the balancing act between the two: the adoption, and the sovereignty or the development of our own ecosystems.
This interview has been edited for clarity and brevity.
(Reporting by Lin Taylor, Editing by Lyndsay Griffiths.)