Elections to job automation: Five AI trends to look out for in 2024
A man casts his vote at a polling station during Argentina's runoff presidential election, in Tigre, on the outskirts of Buenos Aires, Argentina November 19, 2023. REUTERS/Mariana Nedelcu
What’s the context?
From elections to climate tech, here are our predictions of how AI could reshape the world this year
From job automation to how we consume news, artificial intelligence has upended our daily lives in 2023.
New companies have risen to prominence, businesses have rolled out AI features in popular consumer products, and lawmakers across the world have attempted to regulate such tools.
But as the dust settles, what further changes will AI tech bring? Here are five trends to look out for in 2024:
Jobs and hiring
AI tools are reshaping the workplace, handling everything from customer service to helping lawyers draft arguments, and some predict they will automate jobs - such as voice acting - away from human beings.
But not all jobs are at risk of being automated, experts said, citing healthcare as an example.
"While AI (tools) have shown great promise in healthcare sectors, they can't entirely replace medical doctors," Shereen Fouad, a lecturer in computer science from Aston University, told Context.
"They can only support their role by becoming part of their routine in automating administrative tasks, and informing their decisions diagnosing basic cases."
Some companies are also embracing automation in the hiring process itself, including resume screeners that scan applicants' submissions, assessment tools that grade online tests, and facial or emotion recognition tools that can analyse video interviews.
A first-of-its-kind class-action lawsuit was filed in February in the United States, alleging discrimination by employment algorithms, which the U.S. Equal Employment Opportunity Commission says are used by the vast majority of employers in the country.
AI assessment and predictive tools will be "pivotal in autonomously evaluating candidates' skills and forecasting their suitability for job roles", despite concerns around bias and transparency, said Fouad.
"(But) on some occasions, AI-driven recruitment and monitoring systems are trained using historic data that don't represent the population for which the tool is later used," she added.
"This may lead to data bias and potentially discriminatory results. Furthermore, there are some ethical concerns on the privacy and transparency of the AI-driven recruitment processes."
Elections and disinformation
From Argentina to Slovakia and the United States, political campaigns have used generative AI to create promotional material as well as spread disinformation.
Tools such as Midjourney make it cheap and easy to create convincing deepfakes - whether still images or video - that could be used to deceive the public.
Right-wing libertarian Javier Milei won Argentina's landmark election in November, with both camps widely using AI technology to capture voters' attention.
Experts have raised concerns that such actions could result in a "liar's dividend", whereby any negative photos or videos can be dismissed as faked and the public remains skeptical of everything it sees.
"We now have an informational environment in which people doubt the authenticity of even real videos, such as videos of politician scandals," said Kaylyn Jackson Schiff, assistant professor of political science at Purdue University.
That concern is highest for groups that are more susceptible to mis- and disinformation, and in countries where election integrity, authoritarianism, and censorship are more prominent, said Daniel Schiff, also from Purdue University.
This could threaten upcoming elections in countries like the United States, India and Indonesia.
Global regulation
From China to the EU, countries and regional blocs debated what AI advances mean for society and introduced legislation in 2023, some of which is due to come into effect this year.
In Europe, the AI Act will govern AI in general, but rights experts warn the rules do not go far enough, for example in regulating the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes.
China has passed its own rules for AI, saying it must promote the "core values of socialism", while in the United States local authorities have scrambled to put in place 'ethical' AI guidelines in the absence of national laws.
President Biden issued an executive order in October requiring developers of AI systems that pose risks to national security, the economy, public health or safety to share the results of safety tests with the government.
Britain in November held an AI Safety Summit and announced the "Bletchley Declaration", signed by 28 countries and the European Union, which encourages transparency and accountability from those developing frontier AI technology.
Climate tech
As world temperatures continue to rise, governments are turning to AI to cut planet-warming emissions, for example by making manufacturing processes more efficient.
The COP28 U.N. climate summit in Dubai late last year was the first to hold high-level discussions on use of the technology for climate action.
This includes using machine learning to predict floods and wildfires, and optimising solar energy systems to catch the sun's rays.
But AI tools and data for climate action are concentrated in a small number of nations, which can skew results, experts warned. They called for more information to be gathered from Global South countries to make technology such as weather prediction software more accurate.
AI's environmental cost, from the amount of energy it uses to the volume of water it needs to cool data centres, is another concern.
Researchers from the University of California, Riverside, estimate that training GPT-3 in Microsoft's U.S. data centres consumed 700,000 litres (154,000 gallons) of clean freshwater, while the amount of computing power needed to train such models has increased 300,000-fold since 2012.
Personalised AI tools
Advocates say AI could reduce its energy consumption with smaller, more targeted models.
"Ever-increasing parameter counts are not financially feasible and have huge implications for energy efficiency," said Victor Botev, chief technology officer of research platform Iris.ai.
"Smaller models with high performance represent the way forward."
OpenAI in November unveiled a GPT marketplace where users can access personalised artificial intelligence "apps" for tasks like teaching maths or designing stickers. The store will be released in 2024.
Smaller GPTs require less energy to use, and can also be trained on more specific data for more precise answers.
However, the company's chief operating officer Brad Lightcap told CNBC that "there's never one thing you can do with AI that solves that problem in full" and that the technology is still in its early stages.
(Reporting by Adam Smith, Editing by Zoe Tabary)
Context is powered by the Thomson Reuters Foundation Newsroom.