The major U.S. trends in AI in 2025 - and what's next in 2026
What’s the context?
Here's what to know about AI in U.S. workplaces, surveillance, private lives and the courts heading into the new year.
RICHMOND, Virginia - The use of artificial intelligence (AI) surged in the United States over the past year, from its expanding role in immigration enforcement to the growth of so-called grief tech, which creates realistic facsimiles of deceased loved ones.
What were the major trends in 2025 and what's in store for 2026?
Here's what to know:
Immigration enforcement
As part of immigration enforcement, the administration of President Donald Trump ramped up surveillance and the use of AI tools - from facial recognition to robotic patrol dogs.
AI-assisted surveillance has also been used by contractors to scrape social media for immigrants' personal information, which is fed to federal agencies that use it to locate migrants and make arrest and deportation decisions.
The administration has moved to revoke visas over free speech issues in part by using AI to trawl social media for justification – particularly after the killing of conservative activist Charlie Kirk, said Jacob Hoffman-Andrews with the Electronic Frontier Foundation (EFF).
"We know they are certainly using automated systems – the government has talked a lot about using AI to make this stuff more efficient," Hoffman-Andrews told Context.
The EFF joined with the United Auto Workers (UAW) union and other groups to sue over the government's surveillance of social media.
They say the administration has hampered unions' ability to associate with members, who are wary of maintaining an online presence or being linked to groups or views the government disapproves of.
Asked for a response, the U.S. State Department press office said it does not comment on ongoing litigation. The Department of Homeland Security did not respond to requests for comment.
"Government use of AI for surveillance is an evergreen topic and is going to be even bigger in a year," Hoffman-Andrews said.
AI and jobs
The increasing popularity of ChatGPT and AI-driven automation has sparked concern among workers that AI could take over their jobs.
More than half of U.S. workers said they were worried about the impact of AI in the workplace, and 32% said they thought it would lead to fewer job opportunities in the long run, according to a Pew Research Center survey released in February.
Some jobs appear to be more vulnerable than others. Professions like journalism, translation and customer service are at higher risk, while machine and building operators are at lower risk, according to research published this year by Microsoft.
Some fears about AI taking over the workplace may be premature, Hoffman-Andrews said.
"It seems likely that it will in the long term, but I think some predictions of how this would be immediately rolled out everywhere and would put a ton of people out of jobs might go a little slower than we expect."
AI, if widely adopted, could displace up to 7% of the U.S. workforce, according to research from Goldman Sachs.
There have been reductions in force, notably in administrative work, as businesses see AI helping improve efficiency, said Calli Schroeder, director of the AI and Human Rights Program at the Electronic Privacy Information Center (EPIC), a Washington, D.C.-based nonprofit.
But businesses have not always gotten the hoped-for results, she added.
"More and more companies that have ... reduced their workforce after implementing AI are coming to the realization that the AI is not necessarily as accurate as they need it to be or it's not really fit for specific purposes that they had humans doing as roles," Schroeder said.
"My hope is that next year we see kind of a pendulum swing back the other way and people re-hiring for those roles – or at the very minimum realizing that you need a lot of human checks and oversight for any AI system."
AI in grief tech
Once the stuff of science fiction, grief tech, in which technology creates digital avatars of deceased people for their grieving survivors, is on the rise, notably in places like China.
Generative AI platforms like Midjourney that design digital avatars using pictures, text and other online information have raised thorny legal, moral and spiritual questions.
"Once people become dependent on it, there's the fear they might not be able to ever switch it off," Yukihiro Kashiwaguchi, founder of the Japan-based tech company NIUSIA, told Context.
"It's something that needs to be considered as an issue for AI as a whole."
People beware
People should beware of government attempts not only to regulate AI but to dictate and limit the information it generates in response to prompts, Hoffman-Andrews said.
"I expect to see a lot of governments telling AI companies 'Your AI can't say this, your AI can't say that,'" he said.
Reproductive rights could be a major area of contention as the government may move to restrict access to information about abortion by trying to control what information is available through generative AI searches and prompts, he added.
U.S. lawmakers are looking to crack down on minors' access to AI chatbots after complaints that the tools had pushed children toward suicide and exposed them to sexually explicit content.
"We in Congress have a moral duty ... to prevent further harm from this new technology," said Sen. Josh Hawley, a Missouri Republican who has introduced a bill that would ban AI companies from providing AI companions to minors.
Character.AI announced it would no longer allow users under 18 to engage in open-ended chat with AI on the platform after a Florida mother sued over the suicide of her teenage son.
"I think we're going to see a lot more next year in proposals on how to particularly protect teenagers that are interacting with chatbots but in general try to address some of the mental health harms we've been seeing," Schroeder said.
(Reporting by David Sherfinski; Editing by Anastasia Moloney and Ellen Wulfhorst.)