ChatGPT is changing the game -- but not without risks
While undeniably powerful, generative AI tools also come with risks for the individuals and businesses who use them
Josh Lefkowitz is CEO and co-founder of risk intelligence firm Flashpoint. A former consultant to the FBI, he has spent the last two decades tracking and analyzing terrorist and cyber threat groups.
It's been less than a year since the artificial intelligence company OpenAI released ChatGPT, but it already boasts 173 million active users.
Many of them have employed ChatGPT to boost their productivity at work, using it for everything from answering simple questions and brainstorming ideas to drafting documents from scratch.
But while undeniably powerful, generative AI tools like ChatGPT, Dall-E, and Midjourney also come with risks for the individuals and companies who use them. A lot has been written about potential AI perils to society, with some commentators predicting that superintelligent machines could take over the world. But such hyperbolic scenarios are a distraction from much more immediate threats.
Some of the dangers come from the way AI tools are built. ChatGPT and similar programs are "taught" using vast amounts of information. To get it, developers send bots to scrape data from across the internet. AIs suck in material from all manner of websites and databases, including social media, with no vetting for accuracy.
That means that when you ask ChatGPT a factual question, it might give you a correct answer -- or it might not. The fine print warns that ChatGPT can "produce inaccurate information," but that hasn't stopped millions of people from using it for research, in some cases with dire professional consequences.
In May, a New York lawyer asked ChatGPT for examples of comparable cases to help a client's lawsuit against an airline. The lawyer submitted the results in court -- only to learn that they were made up. The judge called on the lawyer to explain himself, and he may face sanctions.
The lesson is that in any undertaking that requires accurate information, employees cannot depend on AI-generated answers. Yes, ChatGPT can point researchers in a general direction, but subsequent vetting is required.
Then there's the fact that ChatGPT does what most major internet companies do: store massive troves of customer information.
OpenAI doesn't sell user data -- yet. But remember the old Silicon Valley adage that if you aren't paying for the product, you're the product? It's hard to imagine that OpenAI, which currently provides its chatbot for free, won't eventually be tempted to monetize everything it knows about its users. Even if it doesn't, it could suffer a data breach. In either case, every question ever posed to it could spill out to a wider audience -- complete with information about who asked which questions. Companies and their employees should exercise caution about what information they share.
There's also a darker reality. Just as professionals of all kinds are exploring how AI can make their work easier, so are criminals. It's fun to ask ChatGPT to draft a note in the style of a Shakespearean sonnet. But novice fraudsters can also use it to draft notes in the style of banks, government agencies like the IRS, or even specific individuals -- making their false missives more convincing.
Forget about those "Hello Dearest" emails from "Nigerian princes" that only fooled the most gullible. In the future, Americans will face AI-generated phishing emails written to sound exactly like their bosses or coworkers.
ChatGPT can also help hackers plan cyberattacks. For instance, it can provide users with fully developed exploits, which are programs designed to take advantage of security flaws in computer systems. This kind of cybercrime is already a major problem, with more than 16 billion personal records stolen online in 2022. With the help of AI, we can expect to see many more thefts.
Finally, there's a broad threat to businesses that produce copyrighted work, from designers to media organizations to software developers. Just as AIs don't vet for accuracy, they don't check whether the material they regurgitate is copyrighted.
For example, journalist Francesco Marconi asked ChatGPT what news sources it was trained on, and learned that the program draws on content from at least 20 news organizations without their authorization.
Questions over what sources may be used to train AIs will likely be debated for decades, both publicly and in the courts. Indeed, the litigation has already started. In January, three artists sued the companies Stability AI and Midjourney for "infringing the rights of millions of artists" by developing artworks from images nabbed from the internet.
In the meantime, companies face real risks of having the copyrighted fruits of their labor stolen and monetized elsewhere. At the same time, those who use AI-generated information may inadvertently plagiarize others' work.
None of this is to suggest that businesses should stop using generative AI entirely. Generative AI has huge potential across multiple industries, from software development to cybersecurity.
Already, we're seeing an explosion of private AIs that operate as seamlessly as ChatGPT -- without compromising their users' data.
But to prevent future theft and exploitation, companies need to approach these new tools with caution.
Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.
Updated: August 22, 2023