The Crisis Between the Emergencies
Non-emergency calls are overwhelming PSAPs. Traditional diversion strategies have hit their ceiling. A new generation of conversational AI is changing how cente...
Editor in Chief
Most automation in 911 doesn’t arrive all at once. It creeps.
A tool is introduced to “assist.” Over time, staff begin to rely on its outputs because they’re fast, consistent, and always available. What started as optional gradually becomes expected. Eventually, no one remembers the manual process it replaced—or whether it should have been replaced at all.
This shift often happens without formal policy changes. No one declares that responsibility has moved from human to system. It just… happens. QA reviewers trust automated flags. Supervisors assume alerts are comprehensive. Dispatchers expect recommendations to be correct.
The risk isn’t malicious design. It’s unexamined reliance.
Automation creep matters in emergency communications because accountability must remain explicit. If an AI system misses something, who is responsible? If no one can answer that clearly, the system has already been given too much authority.
Centers that manage this well tend to do a few things differently. They define what the tool is not allowed to do. They periodically review cases where AI was wrong or unhelpful. And they maintain manual skills even when automation works most of the time.
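The first of those practices can be made concrete in software, not just policy. The sketch below is hypothetical — the action names and function are illustrative, not tied to any real CAD or AI product — but it shows the principle: the tool's permitted actions are enumerated explicitly, and anything not on the list is denied by default.

```python
# Hypothetical sketch of a deny-by-default guardrail for an assistive AI tool.
# Action names are illustrative only.

ALLOWED_ACTIONS = {"suggest_call_type", "flag_for_qa_review"}
DENIED_ACTIONS = {"close_incident", "dispatch_units", "override_call_taker"}

def authorize(action: str) -> bool:
    """Permit only explicitly allowed actions; unknown actions are denied."""
    if action in DENIED_ACTIONS:
        return False
    return action in ALLOWED_ACTIONS

assert authorize("suggest_call_type")
assert not authorize("dispatch_units")
assert not authorize("reroute_call")  # not on the allow list, so denied
```

The point of the deny list alongside the allow list is auditability: when a capability is deliberately withheld, the prohibition is written down where reviewers can see it, rather than existing only as an absence.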
Automation should reduce workload—not quietly redefine responsibility.