Q&A: Call for 'moral courage' as AI expert charts road ahead

Interview

Vilas Dhar, president of the Patrick J. McGovern Foundation, is pictured in this undated photo. Patrick J. McGovern Foundation/handout via Thomson Reuters Foundation.

What’s the context?

Patrick J. McGovern Foundation President Vilas Dhar talks potential and risks of artificial intelligence (AI).

  • Vilas Dhar talks risks and potential of AI
  • UN eyes global action
  • Dhar urges 'human lens' to reap digital benefits

RICHMOND, Virginia - Vilas Dhar is the president of the U.S.-based Patrick J. McGovern Foundation, a global philanthropy working to expand the responsible use of artificial intelligence (AI) to help under-represented groups and promote equity.

Through the foundation, Dhar has helped steer more than $500 million in commitments to groups advancing public health, education, climate action and democratic governance.

He is scheduled to appear at Trust Conference, the Thomson Reuters Foundation's flagship gathering of leaders and experts.  

Dhar spoke with Context ahead of the conference about AI, where the public and private sectors fit into its development, and why he remains optimistic about the benefits of AI - despite its many dangers.

Here's what he had to tell us:

Looking at recent developments in AI, would you say AI has been more of a force for good or a source of division?

I'm a very hopeful person about AI, but I have to be careful. I'm optimistic about what technology promises, but only when it's directed by public, human, community-based interests.

Over the last five years we've seen a lot of evidence of the first part - of all the things that AI could do. 

But for us to build what AI should do requires us to lean in and bring a human-centric lens.

And I think if we do that, then we could really build a digital future that works for everyone.


When we talk about governance of AI and government policy, what might that look like? Are there good role models out there?

I'll give a very clear example: the Chilean government, in collaboration with a number of regional partners, recently deployed the first open-source Spanish-language large language model.

And it's made publicly available for anybody who wants to build AI infrastructure in Spanish-speaking regions.

It's a very good example of how governments can step in and bring public funding and financing to build what is ... a digital public good. 

It's a part of public infrastructure, it's free to use, it's open source, but it lets people build the things they need.    

You say India, not necessarily the obvious choice, is positioning itself as a global leader on public-interest AI. Can you tell us more?    

Rather than relying on private-sector models that are commercially accessible, India is building open-source models itself and an ecosystem where private-sector players can build on top of them to create products and tools.

And because they have such good experience around this with all the work they've done in the past around the identity system (and) the payment system that they've built, there's a model here that's very different from the (U.S.) or Chinese models.

The American model, of course, is very centered on private-sector action (and) commercialization.

The Chinese model (is) government first.

India is more saying 'how do we build the ecosystem and then let a variety of actors come in and build on top of it?'

Can you talk about recent AI developments at the U.N., and any takeaways for the wider world from its deliberations?

The big thing is we need a mechanism by which society leads on questions of AI, and that's not a question of how we regulate code but how we set norms of conscience.

How do we ask fundamental questions like: if we live in a world where AI will produce these massive economic benefits, is there a mechanism by which we equitably distribute those benefits? How do we ensure that we build the platform and the application layers of AI to actually serve needs that aren't necessarily market-driven?

In order to create systemic change we need alignment at the macro level. 

The U.N. is a really promising platform where we bring together this multi-sectoral approach: governments that have the capacity to invest in public AI at scale, industry that kind of knows ... the emerging frontier technology, and civil society that can speak to the needs of communities and actually organize and deliver targeted action.

Lastly, any big, closing thoughts we didn't touch on?

I'm going to say one last thing, which is, we're in a moment (in) time where all of the structures and infrastructure of our AI future are being set and decided right now, which means we have an opportunity to bring in the frameworks of what you've heard me say so many times - rights and norms and values and principles - and embed them in that firmament.

But there's also an urgency - because if we don't get this right in the next five years, then we set a very different path for humanity for decades to come.

So it requires of us a moral courage to bring rights, values and norms into AI decision-making and an urgency to ensure that they're embedded quickly before we (build) too much on top of it.

This interview has been edited for length and clarity.    

To hear more from Vilas Dhar on AI, register for a place at Trust Conference 2025 here.

(Reporting by David Sherfinski; Editing by Lyndsay Griffiths.)

