The Crisis Between the Emergencies
Non-emergency calls are overwhelming PSAPs. Traditional diversion strategies have hit their ceiling. A new generation of conversational AI is changing how centers handle them.
Editor in Chief
AI tools in 911 are often described as “assistive,” but what that actually means isn’t always clear. In practice, the most successful deployments treat human involvement not as a safeguard of last resort, but as a core design requirement.
Emergency calls are messy. Audio quality varies. Information is incomplete. Callers are emotional, confused, or contradictory. AI systems can surface patterns or speed up review, but they cannot reliably resolve ambiguity on their own. When systems are treated as decision-makers rather than inputs, error rates and risk increase quickly.
Human-in-the-loop models work because they recognize this reality. AI highlights, flags, or summarizes—but a trained professional interprets, confirms, and decides. In QA workflows, that might mean using AI to narrow which calls need review, not to score performance automatically. In transcription, it means treating the machine-generated text as a draft, not a record.
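The QA workflow described above can be sketched in code. This is a minimal illustration, not any vendor's implementation; all names (`Call`, `Review`, `select_for_review`, the confidence threshold) are hypothetical. The key design point is that the AI component only nominates calls for review — the score field exists solely on the human-authored record.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Call:
    call_id: str
    transcript_draft: str   # AI transcription: treated as a draft, not a record
    ai_flags: List[str]     # patterns the model surfaced (e.g. "possible miscode")
    ai_confidence: float    # model's confidence in its own flags, 0.0-1.0

@dataclass
class Review:
    call_id: str
    reviewer: str           # a named, trained professional
    score: Optional[int]    # assigned only by the human reviewer, never by the AI
    notes: str

def select_for_review(calls: List[Call], flag_threshold: float = 0.5) -> List[Call]:
    """AI narrows the review queue; it never scores performance.

    Returns only calls the model flagged with sufficient confidence,
    leaving interpretation and scoring to the human reviewer.
    """
    return [c for c in calls if c.ai_flags and c.ai_confidence >= flag_threshold]
```

A usage pattern would be to feed `select_for_review` the day's calls, then have QA staff create `Review` records for the shortlist — the model's output changes *which* calls a person examines, not *how* they are judged.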
Problems tend to arise when the human role is informal or undefined. If staff aren’t clear when to trust AI output and when to override it, responsibility quietly shifts to the system. That’s not automation—it’s abdication.
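One way to keep the human role formal rather than informal is to make it structurally impossible for AI output to take effect without a recorded human decision. The sketch below is illustrative only — the function and exception names are hypothetical — but it shows the shape of the idea: the system refuses to finalize anything until a named person has accepted or overridden the suggestion, so responsibility cannot quietly shift to the software.

```python
from typing import Optional

class UndefinedHumanRoleError(Exception):
    """Raised when an AI suggestion would take effect with no recorded human decision."""

def finalize(ai_suggestion: str,
             human_decision: Optional[str],
             reviewer: Optional[str]) -> dict:
    """Treat the AI suggestion as an input; only a named human can make it final.

    Returns an audit record that preserves both what the AI suggested and
    what the human decided, including whether the human overrode the model.
    """
    if human_decision is None or reviewer is None:
        raise UndefinedHumanRoleError(
            "AI output cannot be finalized without an explicit human decision")
    return {
        "outcome": human_decision,
        "decided_by": reviewer,
        "ai_suggested": ai_suggestion,
        "overridden": human_decision != ai_suggestion,
    }
```

The audit record makes override patterns visible after the fact, which is what distinguishes a defined human role from an informal one.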
In 911, AI can support good decisions. It can’t replace accountability for them.