
Addressing AI's present and future: a false trichotomy?

Technology leaders attend a generative AI (Artificial Intelligence) meeting in San Francisco, California, U.S., June 29, 2023. REUTERS/Carlos Barria

Infighting between AI camps is distracting from the immediate safeguards needed now

By Mike Kubzansky, CEO, Omidyar Network 

This month, the United States signed a first-of-its-kind bilateral agreement with the UK government on AI safety.

The partnership marks a step forward for AI policy, as the world’s governments shift from learning the basics of AI (think Senator Schumer’s AI Insight Forums) to preparing for an inevitable AI future.

But what that future looks like — and whether it is closer to the idyllic vision of human abundance that some are predicting or truer to the risks of human extinction that others warn about — depends largely on who you ask.

One group, focused on existential risk, argues that policymakers should approach AI today with a focus on preventing its potential long-term, apocalyptic threats.

Effective accelerationists, on the other hand, believe generative AI will lead to a post-scarcity tech utopia.

And a third group (predominantly researchers and civil society organizations) prioritizes addressing generative AI’s immediate and known harms, from algorithmic bias and increasingly sophisticated scams to the polarization and discrimination fueled by mis- and disinformation.


I understand the pressure of balancing competing interests amidst rapid technological innovation. But the notion that our approach to generative AI must be singular (that we must focus only on AI’s short-term risks, or only on its long-term risks, or reject the idea of risk altogether) is false.

The reality is that we can’t guard against the longer-term risks of AI, nor can we harness its immense potential, without confronting AI in the present. 

That means, first and foremost, we must take seriously the real-world biases and harms of generative AI.

By ensuring transparency and public oversight of generative AI models and applications, for example through partnerships between private AI developers and public oversight bodies, we can reckon with algorithmic bias, discrimination, copyright infringement, data sovereignty violations, and other unintended outcomes now.

Of course, we shouldn’t dismiss the potential for long-term risk, nor the concerns of existential risk advocates that generative AI could one day have grave implications for humanity (although the technology would need to advance substantially first).

And effective accelerationists are right to believe in real competition, that is, in the power of open-source systems to encourage collaboration and advance the technology.

It can be tempting to fall into an either/or/or mindset, particularly as advocates and entrepreneurs increasingly align themselves with informal factions at government hearings and on conference stages.

This three-sided oppositional thicket is profoundly unhelpful because it distracts from developing the policy and governance framework that is desperately needed to channel and shape this powerful new technology.  

The reality is that technology does not exist in a vacuum. It’s part of society. And society needs guardrails in place not to shut technology down or hamper innovation, but rather to get the most out of it. And, as with prior technologies — from biomedicine to cars — we need guardrails and institutions to ensure that potential harms are minimized by design, and not after the fact.

We cannot be laissez-faire and let the markets solely decide AI’s future. Markets always pick profit over people; the incentives are too powerful and enticing to choose otherwise. We are living daily with the regret that we let markets run unchecked with social media. Let’s not repeat that mistake with generative AI.

Our most important task, therefore, is to install — and soon — a governance framework that can channel generative AI towards its most positive applications and consider its full societal ramifications.  

Ironically, at least two of the camps in this triad — those focused on existential risk and immediate harms — agree that guardrails are needed, and societal governance is critical, albeit for different reasons and to prevent different outcomes. While their cause is common, their infighting is distracting and unproductive.  

At the end of the day, building a responsible AI future means acting now. Crafting fair policies, building strong governance infrastructure, and developing the necessary capabilities today will not only help tackle the thorny challenges ahead, it will also lay the foundation for an equitable, more promising future in which generative AI is governed in service of society’s best interest, not the other way around.  

We may not know exactly how generative AI will evolve, but we can choose to invest in an inclusive, participatory infrastructure now: one that prioritizes human impact, ensures meaningful and diverse decision-making, and creates institutions and laws that balance innovation with regulation.

Building this infrastructure today will not just mitigate real-time harms; it will also ensure we are prepared for the longer-term uncertainties ahead, from generative AI and from other digital technologies coming down the road. And markets demand clear, consistent rules and guardrails to invest and function well.

Failing to enact any guardrails will lead to uncertainty on everything — from copyright protection to liability to downstream social effects — and will chill forward progress in a fog of confusion. 

This is a moment of great consequence. The decisions we make now will build the foundation for greater human flourishing to come, but instead of retreating into an either/or/or mindset, we must all commit to developing AI in service of human values.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.




