Are AI chatbots in courts putting justice at risk?
A woman checks her mobile phone inside the premises of the Supreme Court in New Delhi, India, September 28, 2018. REUTERS/Anushree Fadnavis
What’s the context?
Judges from India to Colombia are turning to AI chatbots in their work, but experts warn of pitfalls such as false information and algorithmic bias
- Judges are using chatbots to answer legal questions
- ChatGPT can give false or misleading data, experts say
- Critics warn of bias, privacy violation, data exploitation
LONDON/BOGOTA/LOS ANGELES - Indian High Court judge Anoop Chitkara has ruled over thousands of cases. But when he refused bail to a man accused of assault and murder, he turned to ChatGPT to help justify his reasoning.
He is among a growing number of justices using artificial intelligence (AI) chatbots to assist them in rulings, with supporters saying the tech can streamline court processes while critics warn it risks bias and injustice.
"AI cannot replace a judge ... However, it has immense potential as an aid in judicial processes," said Chitkara.
"The knowledge revolution has started, and these AI platforms have in certain situations demonstrated their capabilities to instantaneously transform queries into outstanding results."
Chatbots like ChatGPT and Google's Bard are software applications designed to mimic human conversation in response to users' questions.
Chitkara said he did not rely on ChatGPT to help decide his ruling in the 2020 case at the Punjab and Haryana High Court.
However, he wondered if he was relying too heavily on his own "consistent view" that allegations involving an unusually high level of cruelty should count against granting bail, and asked ChatGPT to summarise case law on the issue.
The justice ministry did not immediately respond to a request for comment.
The use of AI in the criminal justice system is growing quickly worldwide, from the popular DoNotPay chatbot lawyer mobile app to robot judges in Estonia adjudicating small claims and AI judges in Chinese courts.
In the Colombian Caribbean city of Cartagena, judge Juan Manuel Padilla also turned to ChatGPT for help in a lawsuit in which an autistic boy's parents were suing his healthcare provider for treatment costs and expenses.
"(ChatGPT) is generating text that is very reliable, very concrete, and applicable to a case in a specific way," said Padilla.
He asked the chatbot several legal questions, such as whether an autistic child is exempt from fees for therapy. He included the chatbot's responses in his ruling, which found in the child's favour.
Concerns over false results
But chatbots' reliability is questionable, said several legal and tech experts.
"Some judges are trying to find a way to make the job faster - but they don't always know the limits or risks," said Juan David Gutiérrez, professor of public policy and data at Universidad del Rosario in Bogota.
"ChatGPT can make up laws and rulings that don't exist. In my view it shouldn't be used for anything important."
There have been numerous examples of chatbots getting information wrong or making up plausible but incorrect answers - which have been dubbed "hallucinations" - such as inventing fictional articles and academic papers.
When ChatGPT was tested on its responses to 50 legal questions by Linklaters, a global law firm headquartered in London, legal experts found it proficient in some areas but severely lacking in others.
The AI confused sections of the Data Protection Act 2018, and failed to give complete answers on English contract law.
"If you didn't already have a very good understanding of that area of law, it would be very hard for you to work that out," solicitor Peter Church, an expert in data privacy at Linklaters, told Context.
Use of chatbot 'a disaster'
Supporters say the technology promises a way to ease the huge backlogs clogging some legal systems.
But AI risks over-simplifying complex problems and could raise unrealistic expectations of tech's capabilities, Dona Mathew and Urvashi Aneja from the research collective Digital Futures Lab wrote in a recent report.
There are also concerns over privacy violations and exploitation of judicial data for profit.
"With biased and incomplete datasets, no legal remedies and accountability safeguards ... these changes can lead to systematic harms like threats to judicial independence and stagnation of legal principles," they wrote.
Raquel Guerrero, a lawyer for three journalists in Bolivia accused of posting photos of a victim of violence without her permission, expressed concerns when the court consulted ChatGPT during an online hearing in April.
Guerrero said the complainant gave permission for the photos to be shared online but later denied she had done so.
Constitutional judges asked ChatGPT about any "legitimate public interest" for journalists posting online photos of a "woman showing parts of her body" without her consent.
ChatGPT answered it was a "violation of the person's privacy and dignity." The judges ordered the photos to be removed from social media.
The court record said ChatGPT does not replace decisions made by jurists, but that it can be used as additional support to be able to "clarify certain concepts."
But Guerrero said the chatbot's use in the hearing was "arbitrary" and a "disaster."
"It can't be used as if it's a calculator that takes away the obligation of judges to use reason and to apply justice and to apply it correctly," Guerrero said, adding she is considering filing a complaint against the judges for using the chatbot.
"Obviously, ChatGPT doesn't stop being a robot. If you ask it in the right way, it will answer what you want to hear."
(Writing by Adam Smith, additional reporting by Anastasia Moloney in Bogota and Avi Asher-Schapiro in Los Angeles, Editing by Sonia Elks)