Cities draw up AI policies as US federal laws lag behind

Sensors are seen mounted on the windshield of a self-driving car during a self-racing cars event in Willows, California, U.S., April 1, 2017. REUTERS/Stephen Lam

What’s the context?

U.S. local authorities scramble to put in place 'ethical' AI guidelines in absence of national laws

  • Boston, New York, Tempe introduce new AI policies
  • Cities work out policies as nationwide rules lacking
  • Concerns on bias, cybersecurity, prioritising efficiency

By the time Stephanie Deitrick started writing an AI policy for the city of Tempe, Arizona, she worried it was already too late.

"It had been on my mind as something that we really need to look at ... before we're hit with something we're not expecting," said Deitrick, the city's chief data and analytics officer. "And then ChatGPT was released."

Almost overnight after its launch in November last year, a technology with wide-ranging implications that Deitrick had been considering in theory became widely used by the public, she told Context.

She called the experience surreal. "It feels like everyone is racing against everyone else to see how to get more AI into what they're doing," Deitrick said.

Like many others in similar roles across the United States, Deitrick is now sprinting to catch up. In June the city council adopted an "Ethical AI Policy" that she spearheaded, and in October a new governance committee started meeting to hash out the city's future approach to AI tools.


Both in the United States and beyond, cities are trying to put in place AI policies largely in the absence of national or transnational guidance, said Mona Sloane, an assistant professor of data science and media studies at the University of Virginia. She calls this local-level leadership "AI localism".

The U.S. cities of Boston, New York, Seattle and San Jose have all in recent months adopted guidelines and policies around AI and "generative" AI tools such as ChatGPT that allow for easy text-based commands.

In October, President Joe Biden issued an executive order creating standards around privacy, safety and rights related to the use of AI, and Senate Majority Leader Charles Schumer has been spearheading a series of lawmaker meetings on potential legislation.

But Congress has yet to pass an AI law, leaving local authorities to step in.

Kate Garman Burns is executive director of MetroLab Network, a nonprofit working with 45 local governments to create policy guidance by next summer.

"The question I got the most is, what are you hearing that other people are doing?" she said.

She said cities felt under pressure to act in the next six to 12 months to understand what the technology can do to improve city services – and what to beware of.

Prioritising efficiency or humanity?

In part that pressure is coming from the tech industry. A blog post from Microsoft on generative AI warns that "the public sector cannot remain frozen as AI changes the world around us."

ChatGPT creator OpenAI did not respond to a request for comment on city efforts to craft guidance.

"This tech genie is out of the bottle," Garman Burns said.

"This is in the hands of the public, and cities are trying to figure out how to respond and be responsible with it."

For Deitrick, that meant emphasising the central role of people in the use, oversight and results of AI tools.

"I put the word human in there a lot," she said of Tempe's policy, "so we don't prioritise efficiency over basic human dignity."

Local initiatives can result in procurement rules or general transparency in a city's deployment of these tools, or in the regulation of specific uses – such as a new law in New York on AI in the hiring process, or around autonomous vehicles or facial recognition, said Sloane, of the University of Virginia.

Together these efforts will likely have knock-on effects for other places, Sloane said, creating "an environment of compliance practices that establish themselves as a standard that will affect an industry at large."

That means cities have an opportunity to be key test beds, said Milou Jansen, Amsterdam-based coordinator of the Cities Coalition for Digital Rights, a global network of municipalities helping each other in digital rights policymaking.

That includes testing whether these tools work and offer actual efficiencies, but also looking at "what is the impact on the neighborhood, and does it address the needs of citizens?" Jansen said.

"Right now, we're still discovering what kind of norms should be okay," she said. "Maybe we want (AI tools) to be used for traffic light optimisation, but not social security."

Some cities are also looking into temporarily halting the use of AI, she said.

A global database of locally led "ethical" AI initiatives called the Atlas of Urban AI lists 184 projects in 66 cities, including Dubai, Helsinki and Mexico City.

These are scored in part on transparency, accountability, lack of discrimination and sustainability – the latter of which ranks poorest among the atlas's projects, said Alexandra Vidal, a researcher and project manager who helps lead the project at the Barcelona think-tank CIDOB.

So far, such initiatives are found more in the Global North, said Marta Galceran Vercher, a research fellow at CIDOB, but she noted that cities such as Barcelona, where officials have passed an explicit mandate around ethical AI, offer significant models.

"Cities are stepping out ahead of the national governments to say, 'We need to be more ambitious'," she said.

'Do a lot more with what we have'

While machine learning and text analytics are not new, tools driven by generative AI offer significant opportunities for cities, Jansen, Garman Burns and others emphasised.

In Williamsport, Pennsylvania, city council president Adam J. Yoder is excited by those prospects, though sober about the risks and the work required for a small town such as his, with fewer than 30,000 people, to take advantage of AI.

"This is a really interesting tool that can help us maximise our productivity, to do a lot more with what we have," he said, pointing to possible benefits including producing documentation or streamlining permitting.

Such efficiency could be especially useful as Williamsport and other towns deal with shrinking revenues while still needing to provide robust municipal services.

Yet Williamsport is only now in the midst of digitising its processes, and education about these new tools will be critical, said Yoder, who is taking part in the national MetroLab discussions and looking forward to the guidance that results.

For now, the city has no policy either to introduce AI or to guard against related risks such as data privacy and cybersecurity, he said.

But on his own, Yoder is already using tools such as ChatGPT in his city work – summarising large documents, reviewing op-eds he has written, even as a starting point for crafting legislation.

"This really enhances my ability to be more effective in the time I offer to the city," said Yoder.

"It's really good as a starting point or a review tool. You just can't take what it gives you as gospel."

(Reporting by Carey L. Biron; editing by Jon Hemming.)


Context is powered by the Thomson Reuters Foundation Newsroom.




