Why Local Governments Are Getting AI Wrong — And How to Get It Right

AI adoption in local government is accelerating faster than most people realize. So is the potential for it to go badly. Here's what's actually happening in city halls across America — and what it takes to deploy AI that residents can trust.

POLITY Team
8 min read

Something Shifted in 2026

For years, the conversation about AI in local government was mostly theoretical. Pilot programs. Task forces. White papers. Cities were curious but cautious, and the technology felt distant from the day-to-day reality of permitting staff, public works departments, and recreation coordinators trying to get through their inboxes.

That era is over.

In 2026, state and local government IT leaders are signaling a clear shift: AI is no longer an experimental curiosity but a core component of how modern municipalities operate. What was once “pilot purgatory” — small chatbots and niche analytics tools that rarely left the lab — has given way to genuine operational deployment. Agencies are embedding AI into resident-facing services, administrative workflows, and public information systems at a pace that would have seemed unrealistic just two years ago.

The demand is real. Staffing shortages are real. The expectation from residents for faster, more accessible government services is real. And AI, deployed thoughtfully, can address all three at once.

But “deployed thoughtfully” is doing a lot of work in that sentence.

The Problem No One Talks About Enough: Trust

When a resident interacts with their municipal government — whether they're asking about a permit, a park reservation, or a recycling rule — they're not just looking for information. They're interacting with a public institution they're supposed to be able to trust.

That trust is not a given. It's earned over time and lost quickly.

The most instructive example of what happens when municipalities get AI wrong comes from New York City. In October 2023, the city launched MyCity, an AI-powered chatbot built on Microsoft's Azure AI platform and trained on over 2,000 NYC web pages, with the promise of giving business owners “trusted information” about city services. The city spent more than $600,000 building it.

Within months, investigative reporting revealed the chatbot was telling businesses they could take workers' tips — illegal under New York law. That landlords could discriminate against tenants with Section 8 housing vouchers — also illegal. That stores were allowed to go cashless — directly contradicting a 2020 city law requiring cash acceptance. When asked for the minimum wage in New York City, the chatbot gave the wrong number.

The city left the chatbot online even after these failures were publicly documented. The new mayor ultimately announced in early 2026 that the tool would be shut down.

The MyCity story isn't a story about AI being bad. It's a story about what happens when AI is deployed without a commitment to accuracy as the non-negotiable first principle, in a context where residents extend a higher level of trust to official sources than they would to any private company.

As one expert put it at the time: “There's a different level of trust that's given to government. Public officials need to consider what kind of damage they can do if someone was to follow this advice and get themselves in trouble.”

The Real Challenges Municipalities Face with AI Adoption

The MyCity failure gets the headlines, but the challenges facing local governments on AI are broader and more structural than any single bad deployment. Here's what's actually making AI adoption hard for municipalities:

The accuracy problem

Most general-purpose AI systems are trained to produce plausible-sounding responses — not to verify whether those responses are correct. In a commercial context, this is an inconvenience. In a government context, it's a liability. When a resident acts on bad information from their city's official website, there are real consequences: missed deadlines, failed applications, fines, legal exposure.

The solution isn't to avoid AI. It's to use AI that answers strictly from verified, official sources — and that clearly indicates when it doesn't have an answer rather than generating a plausible guess.

The “pilot purgatory” problem

For years, many municipalities were stuck evaluating AI tools without clear criteria for what good actually looks like. The marketplace is crowded, vendor claims are difficult to validate, and most local governments don't have the internal capacity to run rigorous testing before deployment. The result was either inaction or premature deployment of tools that weren't ready for public-facing use.

The policy vacuum

A recent analysis found that only 21 out of approximately 22,000 cities and counties across the United States have public-facing AI use policies — despite the fact that AI is already touching resident services in thousands of municipalities. Most are deploying AI tools without a published framework for how those tools are governed, what data they use, or who is accountable when they get something wrong.

This isn't recklessness. It's a capacity gap. Local governments are resource-constrained by design. They don't have large technology teams. They don't have AI ethics officers. They need solutions that are responsible by architecture — not responsible because someone on staff is watching every output.

The trust deficit with residents

A survey of residents across the U.S., Australia, and Spain found that while more than 75% of respondents were aware of AI technologies in their daily lives, roughly half had no idea their local governments were using AI in public services at all. And 68% were unaware that local governments had — or could have — policies governing AI use.

Residents are not opposed to AI in local government. But they haven't been brought along. They find out AI is being used when something goes wrong — which is the worst possible introduction.

What “Getting It Right” Actually Looks Like

The good news is that the path to responsible, effective AI in local government isn't complicated. It just requires being clear about the problem you're solving and choosing tools that are designed for that specific problem.

For the most common and highest-value use case — helping residents find accurate information about local government services — the architecture that works is straightforward:

Answer from official documents only.

The AI should answer exclusively from the municipality's own published materials: ordinances, codes, fee schedules, program guides, policies. Not from the internet. Not from general knowledge. Not from another municipality's rules that might not apply.

Cite every answer.

Every response should be traceable back to the specific document it came from. Residents and staff should be able to verify the source. This isn't just good practice — it's what makes AI defensible in a government context.

Say “I don't know” when the answer isn't there.

An AI that admits it doesn't have an answer is more trustworthy than one that generates a plausible guess. Guessing is the fundamental failure mode of general-purpose AI in government settings, and it's entirely solvable with the right constraints.

Keep it narrow enough to be reliable.

The cities that have deployed chatbots successfully have done so by keeping the scope tightly defined. Los Angeles's CIO described their approach directly in the wake of the NYC failure: the city closely curated the content used by its chatbots. Scope discipline isn't a limitation. It's how you maintain accuracy.
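The four principles above describe a single pipeline: retrieve from a curated corpus of official documents, cite the source of every answer, and refuse when nothing relevant is found. Here is a minimal sketch of that pattern, assuming a tiny in-memory corpus with a naive word-overlap score standing in for a real retrieval system; the document names and threshold are illustrative, not taken from any actual deployment.

```python
# Sketch of the answer-from-official-documents-only pattern:
# retrieve, cite, or refuse. Corpus contents, scoring, and the
# min_overlap threshold are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # the official document this passage came from
    text: str

# Hypothetical curated corpus; a real system would index the
# municipality's ordinances, codes, fee schedules, and policies.
CORPUS = [
    Passage("Municipal Code 12.04",
            "A building permit is required to replace a roof on any residential structure."),
    Passage("Parks Fee Schedule 2026",
            "Pavilion reservations cost $50 per half day for residents."),
]

def answer(question: str, min_overlap: int = 2):
    """Return (answer_text, source) from the best-matching passage,
    or a refusal with no source when nothing relevant is found."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for p in CORPUS:
        score = len(q_words & set(p.text.lower().split()))
        if score > best_score:
            best, best_score = p, score
    if best is None or best_score < min_overlap:
        # Refuse rather than generate a plausible guess.
        return ("I don't know — this isn't covered in the city's official documents.", None)
    return (best.text, best.source)
```

The design choice that matters is the refusal branch: when retrieval finds nothing above the relevance threshold, the system says so and returns no source, rather than falling back to general knowledge.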

Transparency Is the Product — Not a Feature

There's a framing problem in how many govtech vendors talk about AI: they lead with efficiency and cost savings, and treat accuracy and transparency as compliance checkboxes.

For municipalities, that's backwards.

The primary value of AI in local government — the reason it earns its place on a city's website — is that it gives residents accurate, trustworthy access to information their government is obligated to provide. The efficiency gains are real and meaningful. But they're a byproduct of doing the core job well, not the core job itself.

When a resident asks whether they need a permit to replace their roof, they're not looking for a fast answer. They're looking for the right answer. Speed is valuable only when accuracy is already assured.

This is why the architecture of a municipal AI tool matters as much as the interface. A system that pulls from official documents, cites its sources, and declines to guess when it doesn't know isn't just more accurate than a general-purpose chatbot. It's doing something categorically different: it's functioning as a transparency mechanism, not just a convenience tool.

It puts the municipality's own information — the ordinances, the codes, the policies residents have a right to access — within reach of any resident at any hour, in plain language, without friction.

That's not a chatbot. That's a more accessible government.

Where This Is Going

Local governments in 2026 are past the point of asking whether to use AI. The question now is how — and specifically, how to use it in ways that hold up under public scrutiny, protect residents from misinformation, and don't create new liability for the municipality.

The answers are increasingly clear. Narrow scope. Official sources only. Source-cited outputs. A system that knows what it doesn't know.

Municipalities that get this right in the next 12–24 months will have established a foundation of resident trust that makes future AI adoption — in more complex domains — significantly easier. Municipalities that get it wrong will spend years walking back the damage.

The standard isn't perfection. It's accountability. And in local government, accountability starts with making sure every answer you give a resident can be traced back to something real.

GovToKnow is an AI-powered resident assistant built by POLITY for municipal governments. Every answer GovToKnow provides is sourced directly from the municipality's own official documents — ordinances, codes, policies, and published records — and cited back to its source. GovToKnow does not use general internet knowledge or generate responses from outside a municipality's approved materials.

Learn more at govtoknow.com, or request a demo to see how it works on a real municipal website.