
What are AI 'hallucinations' and can they be stopped?

Attendees get their first look at the new Galaxy Book3 series laptops as Samsung Electronics unveils its latest flagship smartphones in San Francisco, California, U.S. February 1, 2023. REUTERS/Peter DaSilva

What’s the context?

AI hallucinations can cause serious problems in fields like healthcare and law, but experts say they cannot be eliminated

  • AI produces false information, known as 'hallucinations'
  • 68% of large companies use AI despite the risks
  • Hallucinations can be reduced but not eliminated

LONDON - There's an elephant in the room when it comes to artificial intelligence (AI): sometimes it simply makes things up and serves up these so-called hallucinations as facts.

This happens with both commercial products like OpenAI's ChatGPT and specialised systems for doctors and lawyers, and it can pose a real-world threat in courtrooms, classrooms, hospitals and beyond, spreading mis- and disinformation.

Despite these risks, companies are keen to integrate AI into their work, with 68% of large companies incorporating at least one AI technology, according to British government research.

But why does AI hallucinate, and is it possible to stop it?

What is an AI hallucination?

Generative AI products such as ChatGPT are built on large language models (LLMs), which work through 'pattern matching': the algorithm looks for specific shapes, words or other sequences in the input data, which might be a particular question or task.

But the algorithm does not know the meaning of the words. While it might have the facade of intelligence, what it does is perhaps closer to pulling Scrabble letters from a large bag, and learning what gets a positive response from the user.

These AI systems are trained on huge amounts of data, but incomplete data or biases - like a missing letter or a bag full of Es - can result in hallucinations.
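
As a rough illustration of that pattern matching - a deliberately tiny toy, not how a production LLM actually works - the sketch below learns which word tends to follow which in a small 'training' corpus and then completes a prompt from frequency alone, confidently repeating an error baked into the data.

```python
# Toy illustration, not a real LLM: a pattern-matching model picks the
# statistically likely next word, with no notion of whether it is true.
import random
from collections import Counter, defaultdict

# Deliberately skewed "training data" - the equivalent of a bag full of Es.
corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of peru is paris ."   # one bad example poisons the pattern
).split()

# Count which word follows each two-word context (a tiny trigram table).
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def next_word(a: str, b: str) -> str:
    # Sample in proportion to how often each continuation was seen in
    # training - pure pattern matching, no meaning involved.
    options = follows[(a, b)]
    return random.choices(list(options), weights=list(options.values()))[0]

print(next_word("peru", "is"))  # confidently prints "paris" - a hallucination
```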

All AI models hallucinate; even the most accurate register factual inconsistencies 2.5% of the time, according to AI company Vectara's hallucination detection model.


Can AI hallucinations be dangerous?

Depending on where AI is used, the effect of hallucinations can range from farcical to severe. 

After Google struck a deal with social media platform Reddit to use its content to train its AI models, its Gemini tool started pulling incorrect advice or jokes from the site - including a recommendation to add glue to cheese to make it stick to pizza.

In courts, lawyers have repeatedly cited non-existent cases generated by AI chatbots, and the World Health Organisation has warned against using LLMs in public healthcare, saying the data used to reach decisions could be biased or inaccurate.

"(It is) even more important for institutions to have safeguards and continuous monitoring in place, including human intervention - in this case, radiologists or medical experts to validate findings - and explainable systems," Ritika Gunnar, a general manager of product management on data and AI at IBM, told Context.

How can hallucinations be reduced?

The risk of hallucinations can be reduced by improving the quality of the training data, using humans to verify and correct the output of AI, and ensuring a level of transparency about how the models work.
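
One way to picture the human-verification step is the hedged sketch below, which assumes a hypothetical confidence score supplied by the system: answers under a chosen threshold are routed to a human reviewer rather than shown to the user.

```python
# Sketch of human-in-the-loop verification: low-confidence answers go to a
# reviewer instead of straight to the user. The confidence score and the
# threshold are illustrative assumptions, not any vendor's API.
REVIEW_THRESHOLD = 0.8

def handle(answer: str, confidence: float, review_queue: list[str]) -> str | None:
    if confidence >= REVIEW_THRESHOLD:
        return answer                 # confident enough to show directly
    review_queue.append(answer)       # a human checks and corrects it first
    return None

queue: list[str] = []
print(handle("Paris is the capital of France.", 0.95, queue))  # shown to the user
print(handle("The capital of Peru is Paris.", 0.41, queue))    # None - held for review
print(queue)                                                    # pending human review
```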

But these processes can be difficult to implement effectively, as private companies are loath to open up their proprietary tools to inspection.

Some large AI companies rely on poorly paid workers in the Global South, who label text, images, video and audio for use in everything from voice recognition assistants to face recognition to 3D image recognition for autonomous vehicles.

The hours are long and the work exhausting, exacerbated by lax labour regulations.

LLMs could also be adapted to reduce the risk of hallucinations. One way of doing this is Retrieval-Augmented Generation (RAG), which grounds the AI's answers in information retrieved from external sources.

While this could be effective, according to AI company ServiceNow, it could carry a high financial cost due to the infrastructure required, such as cloud computing capacity, data acquisition, human managers and more.
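
A minimal sketch of the RAG idea follows, under stated assumptions: the knowledge base, the keyword-overlap retriever and the call_llm stub are illustrative stand-ins, while real deployments use embedding indexes and hosted models - which is where the infrastructure costs come in.

```python
# Minimal sketch of the RAG idea: retrieve relevant passages first, then ask
# the model to answer only from them. Everything here is an illustrative
# stand-in - real systems use embedding indexes and a hosted model API.

KNOWLEDGE_BASE = [
    "Policy doc: refunds are processed within 14 days of a request.",
    "Policy doc: support is available Monday to Friday, 9am to 5pm.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Naive keyword-overlap retrieval, standing in for a vector search.
    q_words = set(question.lower().split())
    return sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return "[model answer grounded in the supplied sources]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the sources below. If they do not contain the "
        "answer, reply 'I don't know'.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How long do refunds take?"))
```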

Instead of LLMs, developers could also deploy smaller language models, which reduce the risk of hallucinations because they can be trained on complete, clearly specified data - akin to choosing an answer from three responses rather than 3,000.

Using these smaller models would also reduce AI's large environmental footprint.
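
The 'three responses rather than 3,000' point can be sketched as constrained output: whatever a small model drafts is snapped onto a short, vetted list of approved answers. The answer list and similarity rule below are illustrative assumptions, not any particular product's method.

```python
# Sketch of constrained output: snap whatever a small model drafts onto a
# short, vetted list of approved answers, leaving no room to invent facts.
from difflib import SequenceMatcher

ALLOWED_ANSWERS = [
    "Your order has shipped.",
    "Your order is being prepared.",
    "I don't know - escalating to a human agent.",
]

def constrained_reply(model_draft: str) -> str:
    # Pick the approved answer most similar to the draft (simple string
    # similarity here, chosen purely for illustration).
    return max(
        ALLOWED_ANSWERS,
        key=lambda a: SequenceMatcher(None, model_draft.lower(), a.lower()).ratio(),
    )

print(constrained_reply("the order shipped out yesterday i think"))
# expected: "Your order has shipped."
```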

However, experts from the National University of Singapore believe that hallucinations will never be eliminated entirely.

"It's challenging to eliminate AI hallucinations entirely, due to the nature of how models generate content," the researchers wrote in a paper published in January.

"An important, but not the only, reason for hallucination is that the problem is beyond LLMs' computation capabilities," they wrote. 

"For those problems, any answer except 'I don't know' is unreliable and suggests that LLMs have added premises implicitly during the generation process. It could potentially reinforce stereotypical opinions and prejudices towards under-represented groups and ideas."

(Reporting by Adam Smith; Editing by Clar Ni Chonghaile.)

