How are Trump's policies affecting global AI safety laws?
U.S. President Donald Trump reacts as he delivers remarks on AI infrastructure in the Roosevelt Room at the White House in Washington, U.S., January 21, 2025. REUTERS/Carlos Barria
What’s the context?
Trump's AI policies risk harming users and could drive changes in global legislation, experts say.
LONDON - President Donald Trump's AI policies, which signal a shift away from the previous administration's focus on preventing bias and mitigating risk, could make users more vulnerable and influence legislation abroad, experts say.
While former president Joe Biden took steps to reduce risks associated with artificial intelligence, Trump has thus far focused on ensuring U.S. dominance in the field, moving to ditch some protective guardrails.
Experts say U.S. policies are likely to influence legislation and policies in Europe as countries jockey for dominance in the fast-growing sector.
Here's what you need to know:
What has Trump done on AI since returning to power?
Trump signed an executive order on Jan. 23 to "enhance America's dominance in AI to promote human flourishing, economic competitiveness, and national security" after revoking Biden's 2023 executive order on AI.
Biden's executive order sought to reduce the risks AI poses to consumers, workers and national security. It required developers to share safety test results with the government and address related chemical, biological, nuclear, and cybersecurity risks.
Trump's order said AI systems should be "free from ideological bias or engineered social agendas", which experts have interpreted as signalling a move away from preventing social bias and discrimination.
"We expect the Trump administration will favour minimal domestic and international regulation on AI," said Seán Ó hÉigeartaigh, who heads a research institute at the University of Cambridge focused on the risks of emerging technologies.
"It seems clear that concerns that had traction with the previous administration such as reduction of bias in AI will not be priorities," he said.
Will Trump's policy affect the safety of AI and other tech?
At the Paris AI Action summit this month, Vice President JD Vance criticised laws governing the sector, saying "massive" regulations could strangle the technology.
The comments came as U.S. officials assess Europe's approach to regulating the tech sector more broadly.
In February, U.S. House Judiciary Chair Jim Jordan demanded EU antitrust chief Teresa Ribera clarify how she enforces the European Union's rules reining in Big Tech, saying they appear to target U.S. companies.
The request came two days after Trump signed a memorandum warning that his administration would scrutinise the EU's Digital Markets Act and the Digital Services Act "that dictate how American companies interact with consumers in the European Union".
Britain's decision to join the U.S. in not signing a declaration on inclusive and sustainable AI at the Paris summit reflected U.S. priorities, said Michael Birtwistle, associate director at the Ada Lovelace Institute, an AI and data research institute.
"The (British) government appears to be signalling it no longer sees bias and discrimination as a priority concern," he told Context.
Britain's Department for Science, Innovation and Technology did not immediately respond to Context's request for comment.
European lawmakers last year approved the European Union's AI Act, which aims to ensure AI systems are transparent and respect existing laws on privacy and fundamental rights.
In February, the EU scrapped draft rules regulating AI, technology patents and consumer privacy on messaging apps after intense lobbying by industry and Big Tech.
How are tech companies responding to the U.S. policy shift?
While experts say it is too soon to see widespread changes as a result of Trump's actions, some companies have shifted key policies.
Google changed its responsible AI principles in February, removing earlier language pledging that "we will not design or deploy AI" in areas including weapons, "technologies that cause or are likely to cause overall harm" or "technologies that gather or use information for surveillance violating internationally accepted norms."
Tech billionaire Elon Musk, who is a "special government employee" in the Trump administration, has also promoted his own AI chatbot Grok by demonstrating its ability to call users slurs.
He has previously claimed competing companies, such as OpenAI, are "training AI to be woke."
Ó hÉigeartaigh said he was worried the administration's push against regulations would encourage leading U.S.-based companies to stop developing safety and security frameworks.
"These frameworks play a crucial role in ensuring that frontier AI systems are properly monitored with adequate safeguards in place," Ó hÉigeartaigh told Context.
However, some experts said that in future, safety frameworks could be valued precisely because they had become less ubiquitous.
"Like safety innovations of the past, AI safety will become a differentiator at a product level," said Ryan Carrier, executive director of ForHumanity, a non-profit that examines existential AI risks and develops corporate-specific solutions.
"AI systems that have meaningful harms (in terms of severity and likelihood) cannot survive, regardless of governmental policy or legislation, the users and impacted persons will not allow it over time," he told Context.
What are the risks to users and countries?
Since Trump's inauguration, the U.S. government has sidelined the AI Safety Institute, which was founded during Biden's administration and tasked with measuring and countering risks from AI systems.
Its inaugural director, Elizabeth Kelly, departed her role on Feb. 5, amid mass layoffs in other government departments.
Ó hÉigeartaigh said that reducing the number of researchers looking into AI safety could increase the risk that criminals or hostile countries exploit weaker AI safety frameworks.
"We run the risk of models vulnerable to attack being used across our societies and infrastructure. We also run the risk of models being adapted and misused for sophisticated phishing and cyberattack operations."
(Reporting by Adam Smith; Editing by Ana Nicolaci da Costa.)