European values can help us navigate AI

Salesforce's slogan related to Artificial Intelligence (AI) is displayed on a window during the 54th annual meeting of the World Economic Forum in Davos, Switzerland, January 16, 2024. REUTERS/Denis Balibouse

Europe can help set global standards for trustworthy AI to safeguard human dignity and fundamental rights

Jan Peter Balkenende is a former Prime Minister of the Netherlands and Professor Emeritus at Erasmus University Rotterdam.

The promise and perils of artificial intelligence (AI) have been top of mind at every recent international gathering, including Davos. And while Europe and European companies may not be at the forefront of this technological revolution, I believe they are uniquely placed to shape its future.

We all sense that AI will have huge consequences even if we cannot yet fully grasp its potential to do good or harm. Of course, I see the positive attributes of AI – its use in education, or to accelerate medical research – but there is also a dark side. I am thinking of the threat that deep fakes pose to our democracies and to our grip on reality; of the jobs that will be destroyed if we allow machines unfettered access to our data; and of the dangers of allowing algorithms to determine our credit-worthiness or likelihood of committing fraud – as is already happening in the Netherlands.

This month, the World Economic Forum labelled AI-powered misinformation the biggest near-term threat to the world. Given this dark side, it is vital that we get our approach to AI right. The fallout of getting it wrong – for human rights, social cohesion, political stability, prosperity, even for the European project itself – does not bear thinking about.

To get it right, technical standards and risk assessments are important, but will not be enough. AI is a domain where ethics and technology need to go hand-in-hand; we need a moral framework for safe AI, so we can trust the technology as it develops. For that we need strong values – and this is where I think Europe can play a unique role.


Who owns your data?

It would be reassuring if the US, China and Europe saw eye to eye on how to develop AI safely and ethically, but this is not the case. Consider how the treatment of data differs. Whereas data in the US is primarily owned and can be exploited by commercial companies, and in China privacy data is de facto owned by the state, in Europe we believe that personal data should belong to the individual.

The European approach is to protect fundamental rights, ensuring freedom not only from market exploitation but from government control as well. This was the raison d’être for the EU’s General Data Protection Regulation (GDPR), which governs the collection and handling of data by companies active in Europe.

Of all the global powers, only Europe has set out to protect its citizens and their data against the exploitative and monopolistic practices of Big Tech companies. And in doing so, it has managed to set the global standard for personal data protection (at least for companies wishing to do business in Europe). It is an example of how strong European values, based on human dignity, can contribute to the creation of standards and safeguards in the rest of the world.

We need these principles and values to guide the development of AI. Left unchecked, it will race ahead with no moral compass.

Safeguarding citizens

Philosopher Govert Buijs and I argue that Europe needs to double down on its values – reformulating them if necessary – in the light of new challenges that include climate change, rising inequality, geopolitical turmoil and, of course, new technologies. We argue that Europe once more needs to take on the task of mitigating capitalism, both for the sake of social fairness and inclusivity and to safeguard the rights of future generations. We emphasise the importance of ‘reconnection’ because we have experienced in recent decades the limitations of a purely market-based approach to economic development: how, in particular, this market triumphalism has exacerbated tensions between generations, between social groups, between European nations, between Europe and other parts of the world, and between our economies and the natural environment.

We need to be able to articulate our values clearly to make the right choices for the future of our societies. When it comes to AI, this means being able to answer yes to a series of questions. Will it promote social inclusivity? Will it be transparent? Will it respect the right to the privacy of individuals? Will it have built-in safeguards against discrimination of all kinds? Will it create wealth and opportunity fairly and equitably?

Unless Europe can shape the future of AI in its own image, the danger is that the continent will become the passive recipient of cultural and ethical mores imported from elsewhere.

A new covenant

But if Europe can put in place ethical safeguards to accompany the development of AI, then our rights and values as a whole will be stronger. In Capitalism Reconnected, we argue that the strengthening of values could become the basis for relaunching the European project (following the foundational phase that secured peace, and the second phase that built the common market).

This new phase would uphold a new ‘European covenant’ that puts human dignity and freedom at the centre of all considerations. We called it a covenant, rather than a contract, because contracts are always conditional. A covenant stands for a bond, regardless of circumstances. It stands for belonging. The covenant would seek to balance economic and environmental priorities and human flourishing, not only in Europe itself but well beyond its borders.

One way Europe could export these values to the rest of the world would be by spearheading a Global Community of Sustainable Technology that provides access to clean technologies and a safe and ethical environment for technological innovation.

I am hopeful. I believe people increasingly understand that if we want to find solutions for the big issues of today – climate change, inequality, security, competitiveness – we need each other. We are stronger together.

Europe needs to defend its values in a multipolar world because perspectives elsewhere on how to deal with this century’s challenges will often be different – technology being just one example. Given the urgency of our ecological, social and technological challenges, there is no time to wait for others to act.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.

