Santiago “Santi” Garces, chief information officer (CIO) for the city of Boston, talked about his organization’s exploration of policy-building around the use of artificial intelligence (AI) technologies during a June 26 event organized by Route Fifty and GCN.

Boston has come a long way since the early 1990s, when I was Massachusetts CIO and former Mayor Ray Flynn (no relation) appointed his first CIO, who just happened to be his senior advisor and political pollster. That was an unusual combination, even for us veteran CIOs who have known government technology leaders with every kind of pedigree.

Boston Mayor Michelle Wu appointed Garces just over a year ago to oversee the Department of Innovation and Technology (DoIT) and to drive the city’s new technology agenda.

At the June 26 event, program hosts and sponsors explained that technology has become intrinsic to any successful, modern government. Across the nation, state and local governments (SLGs) as well as the Feds are increasingly looking to technology to improve service delivery, reduce costs, and create the optimal experience for employees and citizens alike.

Bolstered by a wave of Federal funding, these SLG leaders face a unique opportunity to leverage technology like never before. Doing so requires not only a deep understanding of the tools and technologies to be employed, but also of the issues to be addressed.

AI tech – and especially large language model generative AI like ChatGPT – was high on the list for discussion at the June 26 event.

Moderator Chris Teale, a reporter for Route Fifty and GCN, introduced Boston CIO Garces to explain the background and objectives of his department’s new citywide policy and guidelines.

“The generative AI found in applications like ChatGPT has been the subject of much hype in recent times … they promise more efficient organizations and streamlined operations,” Teale said. “But how can state and local leaders prepare their teams for this technology and leverage its full potential?” he asked.

Garces explained that his organization started to look at several factors, the first being a high general awareness of the technology.

“We started to hear from a number of people through social media and in conversations about how useful they thought AI could become and how surprising – and in some cases amazing – the technology could be,” he said. “There were things that were brought up – such as that 86 percent of people had used ChatGPT – even in the early days.”

Also prompting DoIT to action were risks associated with AI concerning the quality and accuracy of information, and risks associated with the technology more generally.

“The number of people using ChatGPT – I think that there were over a million users in less than five days,” Garces said. “It’s so pervasive, so we started to think there’s potential for harm. There’s potential for good use, and more importantly, there’s no way that we’re going to be able to ignore it.”

“We said, let’s start wrestling around, [and] under what conditions could we figure out how to make sense of that equation,” the CIO said.

That exploration began with understanding the need to embrace the uncertainty of the technology, which is changing rapidly, with new uses being discovered every day.

DoIT wanted to arrive at a policy that was supportive and gave clarity to city employees, Garces said. “We landed in this concept that we had to experiment responsibly, by creating safeguards that enabled us to manage the risk, [and] by enabling employees to interact with the tool so that they would understand what are the things that are useful, and what are the things that would be risky about the tool.”

The organization also acted on one of the core pieces of feedback it received. “We started speaking with academics and with community leaders and other people that we thought had interesting perspectives on the topic. A core piece of feedback that they promoted from the get-go was: make it clear, make it useful and make it simple, no jargon. Think about this from the perspective of a busy government official that has [a lot of] stuff on their plate,” Garces said.

“So, we landed on three guidelines to manage the risks. The first one was making sure that people understood that these are tools,” he said.

His advice for other SLGs? Garces said, “Foremost, this is technology and its usage should not be ignored. It is so widely available, even if you think about what we can control from the government perspective, people [and] community organizers are going to be using the tool to engage and produce content. Other malicious actors are going to use it.”

“I think that engaging with the tools is really helpful and important,” he said. “I think that as you start working on putting together guidelines,” it’s important to have “clarity around what are some of these outcomes, what are the things that we want to preserve, and as important, what things we want to avoid,” the CIO said.

About John Thomas Flynn
John Thomas Flynn serves as a senior advisor for government programs at MeriTalk. He was the first CIO for both the State of California and the Commonwealth of Massachusetts, and was president of NASCIO.