Britain's AI Summit: Five things you need to know
Britain's Prime Minister Rishi Sunak welcomes U.S. Vice President Kamala Harris at the AI Safety Summit in Bletchley Park, near Milton Keynes, Britain, November 2, 2023. REUTERS/Toby Melville
What’s the context?
From the 'Bletchley Declaration' on AI safety to calls for a citizens' assembly on AI, here are the main highlights from this week
- UK govt announces 'Bletchley Declaration' on AI safety
- Calls to monitor AI fairness, privacy, explainability
- Rights groups propose citizens' assembly on AI
At the country house where Nazi Germany's Enigma code was finally cracked, the UK government and world leaders took aim at a new technological puzzle: artificial intelligence.
This week Britain hosted the world's first global AI safety summit - featuring world leaders, tech company executives, academics and nonprofits - at Bletchley Park to examine the risks of the fast-growing technology.
AI experts, rights groups and labour organisations hosted fringe events around the country, warning that the summit's focus on doomsday predictions risks overshadowing the existing harms of some AI systems.
Here are the key takeaways from the UK's AI events:
The 'Bletchley Declaration'
Britain on Wednesday published the "Bletchley Declaration", signed by 28 countries and the European Union, which encourages transparency and accountability from those developing frontier AI technology.
'Frontier AI', as the government defines it, refers to "highly capable general-purpose AI models that can perform a wide variety of tasks".
The declaration said that AI-related issues such as explainability, fairness, bias mitigation and data privacy need to be addressed - and referenced risks of manipulated content, cybersecurity breaches, and biotechnology.
Elon Musk and Rishi Sunak interview
Billionaire Elon Musk welcomed China's engagement on AI safety and said he wanted to see Beijing aligned with Britain and the U.S. on the subject, speaking in London on Thursday alongside British Prime Minister Rishi Sunak.
Musk and Sunak agreed on the possible need for physical "off-switches" to prevent robots from running out of control in a dangerous way, making reference to "The Terminator" film franchise and other science-fiction films.
Musk told Sunak he thought AI was "the most disruptive force in history", speculating the technology would be able to "do everything" and make employment as we know it today a thing of the past.
Calls for more regulation
The "People's Summit for AI Safety", hosted by journalism nonprofit The Citizens, focused on the role of Big Tech companies - many of which attended the main summit.
Panelists said companies' failure to regulate online harms and disinformation on their platforms makes it unlikely they will provide adequate solutions.
"We reject the idea that Big Tech should be given a forum to report to world leaders progress against voluntary commitments," said Clara Maguire, executive director of The Citizens.
"This is not how regulation works - and it's not how we'll achieve 'AI Safety'."
The group pointed to the absence from the event of regulators such as the Competition and Markets Authority (CMA) and the Information Commissioner's Office (ICO), Britain's data watchdog.
Carsten Jung, a senior economist at the IPPR think tank, compared the summit's reliance on industry self-regulation to that of the banking sector, given how pressing the issues presented by AI are.
"We are helplessly behind in terms of our regulatory capacity to actually track those risks," he warned.
A 'citizens' assembly' to tackle AI
Speakers at the AI Fringe event, held at the British Library in London, said new forms of politics were needed to keep AI in check.
"We need to have new forms of politics and citizens' involvement to govern (AI)," said Rich Wilson, head of the Iswe Foundation, a nonprofit focused on citizens' empowerment.
He suggested that an assembly of people chosen by lottery to generate proposals and policies could produce better outcomes than politicians.
Brenda Ogembo, an international advisory board member at the nonprofit DemocracyNext, said a citizens' assembly would need adequate information and infrastructure to explore multiple scenarios in the way politicians do.
However, she said, the diversity such an assembly would require is not reflected in existing research, as most studies about AI originate from the Western world.
Global impact
Rights groups and policymakers warned against labelling the Global South as a single entity when considering the real-world impact of AI and how to regulate it.
AI adoption and potential harms look completely different from India to Brazil to African countries, said Linda Bonyo, founder of the Kenya-based Lawyers Hub.
"When we talk about AI in Africa it's not a mainstream conversation, it's pockets of interest," she said at the British Library on Friday, pointing to low levels of awareness about the link between AI and disinformation, for example.
(Reporting by Adam Smith; Editing by Zoe Tabary.)