Context is powered by the Thomson Reuters Foundation Newsroom.
Our Standards: Thomson Reuters Trust Principles
People work at a site of a new Amazon data centre that is under construction in western Sydney, Australia, September 5, 2025. REUTERS/Hollie Adam
Companies around the world are failing to address the environmental and social risks of AI, a new report finds.
MEXICO CITY - Businesses worldwide are rapidly adopting artificial intelligence but are failing to identify and mitigate its risks to society and the environment, according to a report by the AI Company Data Initiative, the world's largest dataset on corporate adoption of machine learning.
The November report reviews publicly available data on AI policy and adoption in more than 1,000 companies across North and South America, Europe, the Middle East, Africa and the Asia-Pacific region, spanning 13 different sectors.
Most companies do not consider the potential harms of deploying AI, and most lack a public AI strategy informing their workers how those risks are being mitigated, the report showed.
"The biggest challenge is to help businesses understand that responsible AI practice can also offer sustainable growth solutions for a business," said Katie Fowler, head of the Responsible Business initiative at the Thomson Reuters Foundation.
The AI Company Data Initiative is part of the Thomson Reuters Foundation, which is also behind the news platform Context.
Nearly none of the companies sampled considered the carbon footprint of AI use, the report found.
Training the models that power AI systems requires massive amounts of electricity for computers stored in data centres, as well as water to cool them. For example, Google's data centres consumed 6.1 billion gallons of water in 2023, according to the company.
These data centres, the physical force behind generative AI tools, are going up in countries like Mexico, where they are exempt from environmental requirements and where local residents are already struggling with water shortages and power cuts.
Some 68% of firms did not consider the social impact of AI systems, whose creation and improvement rely heavily on data annotators working under precarious conditions in the Global South.
"It's often called the hidden supply chain, because there is so much informal, often forced labour that exists in data enrichment," said Fowler.
Over half of the companies surveyed did not have a publicly available AI strategy or policy, mainly driven by a lack of local regulations on AI.
"We don't know what 'good' looks like, and all that does is breed fear within individual companies to be the first to put (out) their information," said Fowler.
Of those companies that do have an AI policy, only 41% made it available to their employees.
Fowler noted that employees want disclosure of AI policies amid concerns about the technology's role in causing job losses and the changing nature of their work.
Machine learning has already transformed entire sectors. The entertainment industry has been hit especially hard: professional animators, filmmakers and voice actors are pushing for regulations to protect against AI copycats and the illegal scraping of their work to feed chatbots.
But AI's disruption in the workplace goes much further, said Fowler, with employees concerned about the environmental and social impact of the AI tools they use at work.
"(If) companies really want their employees to engage and embrace the benefits of AI, they're also going to have to do a really good job of helping their employees understand how they're mitigating harms in the process," said Fowler.
The AI Company Data Initiative offers a free survey for companies to map where AI is used across their products, operations and services, and to compare their performance with that of their sector peers.
(Reporting by Diana Baptista. Editing by Anastasia Moloney and Ayla Jean Yackley.)